Gaywallet (they/it)

I’m gay

  • 56 Posts
  • 208 Comments
Joined 3 years ago
Cake day: January 28th, 2022

  • The 3D medium had some fantastic art. There were plenty of gimmicks in the movies you’d expect, like Harold and Kumar Go to White Castle (not meant to be a serious movie). But there were also fantastic shots and art direction, such as in Tron: Legacy and Prometheus, where 3D provided a much deeper feel of space and made certain shots that much more emotionally resonant and beautiful.

    There were a lot more misses than wins, as most directors saw it as a gimmick, but not everyone did. The folks who thought carefully about how the extra dimension would affect a shot (even when it was done in post rather than shot on 3D cameras) made some wonderful art, and it’s a shame so many people missed out on it because they, too, couldn’t see past the gimmick.


  • The quantity of disinformation is irrelevant if people don’t fall for it

    I don’t know about you, but I find it increasingly difficult to find unbiased takes, and I find myself spending more time digging than I previously did. Because of this, I am increasingly misled about things, because the truth can be so obscured that I need an actual academic to parse what information is out there and separate primary sources from other misled individuals.

    Not to say I disagree with your point - I think you make a fair one - but I do believe that the quantity of disinformation is absolutely relevant, especially in an age where not only can anyone share their misinformed beliefs online, but malicious actors and AI are increasingly doing so as well.




  • Kids have been doing idiotic shit to themselves since the dawn of time. Tik tok or youtube didn’t cause this.

    It’s not about who caused it, it’s about responsibility: the responsibility that comes with making it easy to spread and amplifying the message. Kids in your class are a very different audience from millions of viewers. Even in grade school, there’s a chance an adult might see it and stop it from happening or educate the children.

    Ultimately this is an issue of public health and of education. For such a huge company, a $10m fine is practically nothing, especially when they could train their own algorithm not to surface content like this, or have moderation that removes potentially harmful content. Why are you going to bat for a huge company to escape responsibility for content that caused real harm?


  • There is stuff that’s hard to understand and or get the context right and then there’s the holocaust.

    Absolutely. It was a difficult comment to write, because I suspect I fall closer to your opinion than it may seem without the context of what I’ve said elsewhere, and I don’t want it to seem like I’m arguing in favor of being tolerant of the intolerant (we absolutely should not be - punching Nazis is good and correct). The comment is more about how shades of gray do exist even when the stakes are high, and our tolerance for these shades of gray should be fairly low.


  • In principle I want to agree with you, and I also think there’s a point at which someone knows or has done enough that they bear responsibility for their beliefs and actions.

    But at the same time, we have to recognize that there is propaganda, that people tell themselves lies to justify things which they then tell as lies to others, and that folks are sometimes so busy scrambling to survive that they don’t have time to sit and think through their beliefs and actions. I genuinely believe there are many folks who are misled, and that there’s a spectrum of how harmful and hateful one’s actions can be.

    If I think back to my own childhood, for example, there were periods of time where I held and parroted beliefs that were harmful: racist, sexist, and bigoted. I was privileged enough to have the time, space, and resources to evaluate those ideas, realize their harm, and change my behavior. But there was a period in time when some of these beliefs, if shared online, would have had people rightfully upset. I think there are folks out there who might be accused of sympathizing with bigotry on specific issues simply because they don’t know any better, and there are others who need to take responsibility for the bigotry they are causing and have clearly crossed over from sympathizer to bigot.

    Elon clearly crossed that line long ago. There’s certainly responsibility that comes along with action - devoting resources to and spreading hateful speech on a global platform has real consequences, and he deserves all the hate he is receiving. But I do think there are folks who haven’t crossed that line quite yet, and that a gray area exists where some folks hold uneducated, ignorant, or unquestioned beliefs but aren’t causing enough damage or harm to be called sympathizers.


  • Any information humanity has ever preserved in any format is worthless

    It’s like this person only just discovered science, lol. Have they never realized that bias is a thing? There’s a reason we learn to cite our sources: people need the context of what bias is being shown. Entire civilizations have been erased by the people who conquered them - do you really think the conquerors didn’t rewrite the history of who those people were? Has this person never followed scientific advancement, where people test and validate that results can be reproduced?

    Humans are absolutely gonna human. The author is right that a single source carries a lot less factual weight than many sources, but it’s catastrophizing to call it worthless, and doing so ignores how additional information can add to or detract from a particular claim - so long as we examine the biases present in the creation of said information resources.


  • This isn’t just about GPT. Of note, one example from the article:

    The AI assistant conducted a Breast Imaging Reporting and Data System (BI-RADS) assessment on each scan. Researchers knew beforehand which mammograms had cancer but set up the AI to provide an incorrect answer for a subset of the scans. When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.

    In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the radiologists’ accuracy. This kind of bias is important to highlight and is of critical importance when we talk about when and how to ethically introduce any form of computerized assistance in healthcare.
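
    To get a feel for what those numbers imply, here’s a toy back-of-the-envelope model (my own illustration, not the study’s methodology): assume a reader defers to the AI’s call with probability p and otherwise answers independently at their usual ~80% accuracy. On the deliberately spoiled scans, deferring always yields the wrong answer, so observed accuracy is roughly (1 - p) × 0.80, and we can back p out from the reported figures:

    # Toy model (my own assumption, not from the study): on spoiled scans,
    # accuracy = (1 - p) * solo_accuracy, where p is how often the reader
    # defers to the (incorrect) AI suggestion.
    def implied_follow_rate(observed_accuracy: float, solo_accuracy: float = 0.80) -> float:
        """Back out how often readers deferred to the wrong AI call."""
        return 1 - observed_accuracy / solo_accuracy

    print(implied_follow_rate(0.22))  # less experienced readers: ~0.72
    print(implied_follow_rate(0.45))  # very experienced readers: ~0.44

    Under that (admittedly crude) assumption, less experienced readers followed the wrong AI roughly three times out of four, and even very experienced readers followed it nearly half the time.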


  • It’s FUCKING OBVIOUS

    What is obvious to you is not always obvious to others. There are already countless examples of AI being used to sort through job applicants, decide who gets audited by child protective services, and determine who can get a visa for a country.

    But it’s also more insidious than that, because the far-reaching implications of this bias often cannot be predicted. For example, excluding all gender data from training ended up making sexism worse in a real-world example of AI-assisted financial lending, and the same was true for Apple’s credit card. We even have full-blown articles showing how the removal of data can actually reinforce bias, indicating that it’s not just a question of what material is used to train the model, but also of what data is not used or is explicitly removed (the sketch at the end of this comment illustrates one way that happens).

    This is so much more complicated than “this is obvious”, and there are a lot of signs pointing toward the need for regulation around AI and ML models being used in places where it really matters, such as decision-making, until we understand them a lot better.
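
    To make the “removing data can reinforce bias” point concrete, here’s a minimal sketch with synthetic data (my own toy example, not the actual lending or credit-card systems above). The model never sees the gender column, but a correlated proxy feature lets it reconstruct the bias baked into historical labels:

    # Minimal sketch of "fairness through unawareness" failing.
    # All data is synthetic; feature names are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    gender = rng.integers(0, 2, n)            # protected attribute (0/1)
    income = rng.normal(50, 10, n)            # legitimate signal
    proxy = gender + rng.normal(0, 0.3, n)    # e.g. a shopping-profile feature strongly correlated with gender

    # Historical approvals were biased: at the same income, worse odds for gender == 1
    logits = 0.1 * (income - 50) - 1.5 * gender
    approved = rng.random(n) < 1 / (1 + np.exp(-logits))

    # Train WITHOUT the gender column - "removing" the sensitive data
    X = np.column_stack([income, proxy])
    model = LogisticRegression().fit(X, approved)
    pred = model.predict(X)

    for g in (0, 1):
        print(f"gender={g}: predicted approval rate {pred[gender == g].mean():.2f}")
    # The gap persists: the model recovers gender from the proxy, so dropping
    # the column hides the bias in the labels rather than removing it.

    None of this says the real systems worked exactly this way; it just shows why “we deleted the sensitive column” is not, by itself, evidence that a model is unbiased.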




  • Okay, I understand what you are saying now, but I believe you are conflating two ideas here.

    The first idea is about learning the concepts, and not just the specifics. There’s a difference between memorizing a specific chemical reaction and understanding types of chemical reactions and using that to deduce what a specific chemical reaction would be given two substances. I would not call that intuition, however, as it’s a matter of learning larger patterns, rules, or processes.

    The second idea is about making things happen faster and less consciously. In essence, this is pattern recognition, but in practice it’s a bit more complicated. Playing a piece over and over or shooting a basketball over and over is a rather unique process in that it involves muscle memory (or more accurately it involves specific areas of the brain devoted to motor cortex activation patterns working in sync with sensory systems such as proprioception). Knowing how to declare a variable or the order of operations, on the other hand, is pattern recognition within the context of a specific language or programming languages in general (as a reflection of currently circulating/used programming languages). I would consider both of these (muscle memory and pattern recognition) as aligned with the idea of intuition as you’ve defined it.

    Rote learning is not necessary to understand concepts, but the amount of repetition needed to remember an idea after x period of time is going to vary from person to person, and with how long afterward you expect someone to remember it. Pattern recognition and muscle memory, however, typically require a higher amount of repetition to sink in, though this too will vary depending on the person and the time between learning and recall.



  • I want to start off by saying that I agree there are aspects of the process which are important and should be learned, but this has more to do with critical thinking and applicable skills than with the process itself.

    Of note, I believe this part of your reply in particular is somewhat shortsighted:

    Cheating, whether using AI or not, is preventing yourself from learning and developing mastery and understanding.

    Using AI to answer a question is not necessarily preventing yourself from learning and developing mastery and understanding. The use of AI is a skill in the same way that any ability to look up information is a skill. But blindly putting information into an AI and copy/pasting the results is very different from using AI as a resource in a similar way one might use a book or an article as a resource. A single scientific study with a finding doesn’t make fact - it provides evidence for fact and must be considered in the context of other available evidence.

    In addition, learning to interact with and use AI is a skill in the same way that learning to interact with and use a phone, or the internet, or an app are all skills. With interaction layers becoming increasingly abstract (which is normal and good), people need to have skills at each layer in order for processes to exist and for tools to be useful to humanity. Most modern tools require people who can operate at different levels with different levels of skill. Computers are an easy example, since you are replying on some kind of electronic device which requires everyone from chemists to engineers to fabrication specialists and programmers (hardware, software, operating system, etc.) to work, but this is true for nearly any human-made product in the modern world. Being able to drive a car is a very different skill set from being able to maintain a car, or work on a car, or fabricate parts for a car, or design parts for a car, or design the machinery that manufactures the parts for the car, and so on.

    This is a particularly long-winded way of pointing out something that has always been true: the idea that you should learn how to do math in your head because ‘you won’t always have a calculator’, or that you need to understand how to do the problem in your head, or how the calculator works, in order to understand the material, is a false one, and it erases the complexity of modern life. Practicing the process helps you learn a specific skill in a specific context, and people who make use of existing systems to bypass the need for that skill are not better or worse - they are simply training a different skill.

    The means by which they bypass the process is extremely important, though - they could give it no thought at all, or they may think critically about it and devise an approach which still pays attention to the underlying process without fully knowing how to replicate it. The difference in approach matters, and in the context of learning it’s important to experiment and build the critical thinking skills to decide where you wish to have that additional mastery and what level of abstraction you are comfortable with and care about interacting with.