• 0 Posts
  • 126 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • It was far less bad than people make it out to be. I was watching on a stream and so many comments were talking about how he looked, and I’m sitting here thinking… y’all realize he’s listening to Trump speak, right? Anyone actually listening to what that monster has to say is going to look either befuddled or dismayed. He looked both. He definitely had some weak spots, but compare that to Trump, who wouldn’t even answer a question and blatantly lied every other second.

    It sucks. People were basically cheering him on online; the harshest comments against him that I saw were “they’re both so old”. No comment on the insanity or the racism or the lies, just memeing on old Biden. Which, yeah, he deserves, but the rhetoric is reminding me of 2016 and it does not inspire hope.


  • This is pretty much the only way that I use AI. It can brainstorm 50 ideas faster than I can and format them in a way that lets me actually get started on projects rather than planning out each step myself.

    AI is pretty strong at what I have been calling “permanent facts”. Using any song as an example, it will always have the same key, tempo, scales, etc. As such, when asking for details about a song, I’ll have it list out the key, scales, and tempo, and ask it to show unconventional scales that will play over it. Another example of a permanent fact would be someone’s death date, as that isn’t really going to change.

    On the other hand, temporary facts are where hallucination and other inaccuracies come in. There’s no way for LLMs to get new information, so they don’t know about career changes, current ages, or net worth. You can utilize permanent facts to get accurate information about temporary facts, but that’s not nearly as useful. I think one of the major issues people have with LLMs (model creation aside) is that our society really values temporary facts, so when an LLM gets one wrong people like to point at that as a fault. Which it certainly is, but to me it’s kind of like pointing at Photoshop and laughing that it can’t even be used to write a book – like, OK, but that’s not really its purpose?

    I think another example of LLMs definitely being useful was all of those privacy-nightmare Excel/Sheets plugins. Privacy aside, that’s basically the ideal use case for LLMs, as you are pointing it at permanent facts (the data in cells A–Z) and having it sort them in some fashion. I’ve seen a lot of LLM hallucinations for sure, but I’ve also seen a lot of consistency when actually using it as intended. I’ve yet to have it be “wrong” when I was testing my music information template or when sorting out data in Excel.
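
    To make that concrete, here is a rough sketch of what those plugins are more or less doing under the hood. This is a hypothetical example, assuming the OpenAI Python client; the model name, prompt, and cell data are placeholders, not any specific plugin’s code.

    ```python
    # Hypothetical sketch: hand a block of spreadsheet cells ("permanent facts")
    # to an LLM and ask it to sort them. Placeholder data and model name.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The "permanent facts": a small range of cell values pulled from a sheet.
    cells = [
        ["Song", "Key", "Tempo"],
        ["Tune A", "E minor", "128"],
        ["Tune B", "C major", "92"],
        ["Tune C", "A minor", "140"],
    ]

    # Serialize the range as plain text so the model only works with what it is given.
    table = "\n".join(", ".join(row) for row in cells)

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Sort these rows by tempo, slowest to fastest, "
                       f"and return them as a comma-separated table:\n{table}",
        }],
    )

    print(response.choices[0].message.content)
    ```

    Because everything the model needs is right there in the prompt, it isn’t being asked to recall anything that might have changed since its training data was collected – which is the whole point.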

    Much outside of that, though, no. It’s only useful for getting mass amounts of theory in a short session, not so much for that information being reliable. That might sound like a bad tool, but as mentioned it has plenty of use cases; people are just using it as a tool very, very poorly. (It can also be used maliciously more easily than most other tools, which definitely undercuts its status as a “good” tool.)





  • Yeah, contrary to all the negativity in this thread, I think there are a lot of worthwhile reasons for this that aren’t centered on dwelling on the loss of a loved one. Think of how many family recipes could be preserved. Think of the stories that could be retold to you in 10 years. Think of the little things that you’d easily forget as time passes. These are all ways of keeping someone with us without making their death the main focus.

    Yes, death and moving on are a part of life, but we also always say to keep people alive in our hearts. I think there are plenty of ways to keep people around us alive without having them present; I don’t think an AI version of someone inherently keeps their spirit from continuing on, nor does it inherently keep the people who loved them from living in the moment.

    Also, I can’t help but think of the Star Trek computer with this. When I was young I had a close gaming friend whom we lost too soon; he was very much an announcer personality. He would have been perfect as my voice assistant, and he would have thought it was hilarious.

    Anyway, I definitely see plenty of downsides, don’t get me wrong. The potential for someone to wallow in this is high. But I also think there are quite a few upsides, as mentioned – these recordings aren’t ephemeral, and I think it’s somewhat fair to pick and choose good memories to pass down and remember. Quite a few old philosophical thought experiments are coming to fruition with tech these days.




  • Energy restrictions actually could be worked around pretty easily using analog computing methods. Otherwise I agree completely, though – what’s the point of using energy on useless tools? There are so many great things that AI is and can be used for, but of course, like anything exploitable, whatever is “for the people” ends up being some scheme for extracting our dollars.

    The funny part to me is those mentioned “beautiful” AI cabins that are clearly fake – there’s this weird dichotomy of people either not caring or being too ignorant to notice the poor details, while at the same time so many generative AI tools are specifically being used to remove imperfections during the editing process. And that in itself is a shame. I’m definitely guilty of aiming for “the perfect composition”, but sometimes nature and timing force your hand, which makes the piece ephemeral in a unique way. Shadows are going to exist, background subjects are going to exist.

    The current state of marketed AI is selling the promise of perfection, something that’s been getting sold for years already. It’s just far easier now to pump out scam material with these tools, and it gets easier with each advancement in these sorts of technologies – and now the harm extends beyond the scam’s victims to the environment as well.

    It really sucks being an optimist sometimes.






  • Oh.

    I’m not Transphobic I just hate Butcher-surgeons And castrators But that’s Just Me.

    His Twitter.

    Here’s a nice article that is freely available; it has plenty of examples, as does his aforementioned Twitter.

    https://oncanadaproject.ca/blog/jordan-peterson-is-the-worst

    Frankly, it’s alarming that you have apparently had to ask multiple people this question:

    I’ve asked people who criticize him to quote any passage, even as short as a single sentence, either uttered or written by him that they consider wrong.

    And yet all you ever needed to do was take a glance at his Twitter to see the vile things he has said. And these are just a few from a literal 5-minute search, because if you actually listen to or read what he says, it is clear what he means. He is a sad, angry man who promotes hate under the guise of “self-help”: “Did you know that taking care of yourself is good for you? Oh, and by the way…”

    So here’s another to really hammer it in: https://www.axios.com/2022/08/02/youtube-demonetized-jordan-peterson-videos which even links the video where he says it himself, so you don’t have to take someone else’s write-up of what he said on faith.


  • These are never the sort of answers I would want to ask AI for anyway (not a slight against your example; this is a common thing I see).

    @u_tamtam@programming.dev

    I also haven’t seen any practical advantage to using LLM prompts vs. traditional search engines in the general case:

    For general temporary facts I would agree. Even with Amazon’s summarized reviews, it can be handy to know that “adhesive issues” are commonly cited… but I’d learn that from reading the reviews anyway… Like, a lot of the time it comes down to AI being used when the human should do their own due diligence. I will even admit to this in the very next paragraph.

    I find AI to be especially good at things I am not, like math. I am very good at estimations, and I can work some things out over time. However, I am much slower compared to asking “I currently make 2.1-Z a month and I have 397-Z earning that interest. I would like to make 65-Z a month; how much do I need earning interest to make that?” (roughly 13,100, by the way) and getting that answer along with the formula showing its work. It spits out the answer in about the time it took me to phrase the question, both of which were far faster than pulling up a calculator and doing the same math. It’s not that I can’t, it just takes time that could be better spent actually doing the thing I want to do, which is figuring out how many months it will take to reach that number based on what I earn.
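
    For what it’s worth, the arithmetic behind that question is just a proportion. Here is a minimal sketch in Python, using the numbers from the question above and assuming a flat monthly rate with no compounding (different assumptions will nudge the exact figure around):

    ```python
    # Back-of-the-napkin version of the interest question above.
    # Assumes a flat monthly rate with no compounding, so treat it as an estimate.
    current_principal = 397.0   # Z currently earning interest
    current_monthly = 2.1       # Z earned per month on that principal
    target_monthly = 65.0       # Z I'd like to earn per month

    monthly_rate = current_monthly / current_principal   # roughly 0.53% per month
    needed_principal = target_monthly / monthly_rate      # low five figures of Z

    print(f"Monthly rate: {monthly_rate:.4%}")
    print(f"Principal needed for {target_monthly}-Z/month: {needed_principal:,.0f}-Z")
    ```

    The nice part of asking the LLM the same thing is that it shows the formula, so a quick sketch like this is all it takes to sanity-check the answer.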

    Similarly, this rings true for a lot of things with “facts.” Perpetual facts, or immutable facts, are the best use for AI – in my opinion, based on experience, of course.

    A fact about a song is that it will always be in the key it was created in. A key will always have a specific set of scales that can be used with it. An equation will always have the same answer. These are, for the most part, immutable facts. A person, on the other hand, will not always be the same age, or even living, nor will their net worth stay the same. Let’s not even get started on the weather! These are temporary facts.

    Quite a few people tend to ask AI for temporary facts (rightfully so – it’s what we would like to do on a day-to-day basis for casual questions), and it gets a lot of flak for not doing a great job at it (again rightfully so, since these are basic questions). But I have found that AI is actually quite strong with perpetual facts. When time is short and at the end of the day I just want to jam to my favorite songs, I can get a quick reminder of the key and the scales I can use to play along. On my own I know and can remember these things, but asking a question and getting an answer possibly even faster is really nice.

    Not to be pro-AI – in this case I really think it comes down to using the tool you have. We live in the present and the future, so it seems ridiculous to rely on something trained on data rooted in the past and expect it to always reflect the present. Hence, immutable facts tend to be more reliable to work with when using AI.

    I like tech, so I have used and played with local LLMs and Stable Diffusion models, and I even worked on a model based on my own Zentangle art, but I don’t think I would ever actively rely on this technology for anything more than cursory fun when I’m short on time and energy, or as a supplement to something that, frankly, I’m going to take far too long to learn and will forget within a couple of months once I no longer need it. I don’t exactly feel the need to memorize the 300,000 Excel sheet tricks, but I will sure as shit ask BarGemeni about them. Using it to confirm my estimations, to see that I was roughly accurate compared to an AI that is roughly accurate, is good enough for me for some quick and dirty math.

    Ultimately that’s what the LLM-AI debate is for me. Relying on it for anything that is ever-changing, or using it for anything more than basic fun, is setting yourself up for a bad time. Using it here and there as a calculator, or for some non-important details about something that has remained static since the dawn of time? You can net yourself some pretty nice futuristic “hell yeah”s. Packing these things up into little boxes like supplanting a phone (or adding it to your phone), using it to create non-existent support (both support staff and support for terrible products to trick people into buying them), or adding it to rice cookers and refrigerators is… the expected direction, but not the one I was hoping for.


  • Taking his ethics and actions out of the equation for a second – I would have no issues with his businesses if they weren’t scamming states out of legitimate transportation and fucking with people just because he could.

    While dangerous, I’m not really against the idea of selling flamethrowers, kind of. It is kind of an American right, which may be dumb, but fuck if I have anything to say about it. And while it produces a lot of space junk, I’m not against Starlink or SpaceX either, especially the former, since it does do a lot of good. Coverage in the middle of the U.S. is not good, and anything more is welcome.

    Ultimately, what it comes down to is the fact that more money tends to mean less regulation, and – bringing ethics and actions back into the mix – he is abusing that. The flamethrower ploy could have been snark at the United States for not having regulation on that (if it were something actually important, that might have mattered…), and similarly the Hyperloop scheme could have been some form of commentary on how easy it is for a billionaire to manipulate voters with obvious pipe dreams, before going ahead with the high-speed train plan anyway.

    Instead, he gets butthurt and lashes out. I know we’re on the same page; if anything, I’m disappointed specifically because he is in a position to be doing a lot of good and has convinced some people that he is.