If I get it minimally working (aka basic HTTP request and headers able to export), I will upload it to GitHub and reply to you with a link.
Yeah, I’m annoyed by this as I’m looking to script a rudimentary Bruno->Postman tool so I won’t be blocked at work on Monday. It means I need to dig into their tooling.
they have an internal bru2json method that is used when exporting a collection into a single file, so I wonder what the benefit is of keeping it in the proprietary format at all. maybe it makes it a bit easier to edit by hand, which is a supported use case, but there’s JSON tooling (schemas, autocomplete) that gets you most of that anyway, iirc
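for anyone curious what the conversion roughly involves: .bru files are mostly flat `name { key: value }` blocks. here’s a toy sketch of the bru->JSON idea — this is NOT Bruno’s actual bru2json, just a minimal parser under my assumptions about the format (real .bru has multiline bodies, typed blocks, etc.):

```python
import json
import re

def parse_bru(text):
    """Toy parser: turn top-level `name { key: value }` blocks into a dict.

    A sketch only -- the real .bru grammar is richer (multiline script/body
    blocks, disabled entries, and so on), so don't use this for real files.
    """
    blocks = {}
    # Match each `blockname { ... }` section (no nested braces handled).
    for match in re.finditer(r'(\w+)\s*\{([^}]*)\}', text):
        name, body = match.group(1), match.group(2)
        entries = {}
        for line in body.strip().splitlines():
            if ':' in line:
                key, _, value = line.partition(':')
                entries[key.strip()] = value.strip()
        blocks[name] = entries
    return blocks

sample = """
meta {
  name: Get User
  type: http
}

get {
  url: https://example.com/api/users
}
"""

print(json.dumps(parse_bru(sample), indent=2))
```

from there, mapping the resulting dict onto Postman’s collection JSON (the `info` / `item` / `request` structure) is mostly key shuffling.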
EDIT: I has made script (very wip) https://github.com/wtpisaac/bruno2postman
I saw that too, but I couldn’t tell if it was a community or corporate backed thing; I also don’t like that it’s only available through a browser (I know Bruno is Electron, but having a separate desktop app is nice to me)
At minimum Hoppscotch sells some kind of Enterprise Edition https://docs.hoppscotch.io/documentation/self-host/enterprise-edition/getting-started
I don’t know the details, but I’ve just gotten burned too much. Bruno seems genuinely fully libre, no bs, so I’m hoping that it gets more traction.
No problem. Trying to raise awareness of this tool bc Insomnia totally screwed me up at work today.
I read the article, and stand by my statement - “AI” does not apply to self driving cars the same way it applies to robotics used by law enforcement. These are two separate categories of problem, and I don’t see how some unified frustration at “AI” or “robotics” applies to both.
Self driving cars have issues because the machine learning models driving them are not sufficient to navigate the complexities of real roads, and there is no reliable human fallback. (See: Autopilot)
Robotics use by law enforcement has issues because it removes the human factor from enforcement, which raises the question of whether deadly force can ever be justified (does a suspect pose a danger to any officer if there is no human contact?), along with worries about dehumanization and other factors like data collection. Most of these aren’t even self driving; from what I understand, law enforcement remote-pilots them.
these are separate problem spaces: they aren’t deadly in the same ways, aren’t objectionable in the same ways, and should be treated and analyzed as distinct problems. by reducing everything to “AI” and “robots” you create a problem that makes sense only to the technically uninclined, and you blur any meaningful discussion of the particulars of each issue.
This just feels like non-technical fear mongering. Frankly, the term “AI” is just way too overused for any of this to be useful - Autopilot, manufacturing robots, and ChatGPT are all distinct systems that have their own concerns, tradeoffs, regulatory issues, etc. and trying to lump them together reduces the capacity for discussion down to a single (not very useful, imo) take
editing for clarity: I’m for discussion of more regulation and caution, but conflating tons of disparate technologies still imo muddies the waters of public discussion
“changes to improve” increase profit margins
“your overall experience” sadism
actually though how do you justify charging for your normal bloody unaltered logo lmfao
fuck spez, viva la fediverse
here you go: https://github.com/wtpisaac/bruno2postman