Tixanou@lemmy.world to Lemmy Shitpost@lemmy.world · 6 months ago (edited)
kate@lemmy.uhhoh.com · 7 months ago
Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that

KevonLooney@lemm.ee · 7 months ago
Do you just take what people say on here as fact? That’s the problem, people are taking LLM results as fact.

BakerBagel@midwest.social · 7 months ago
It should if you are gonna feed it satire to learn from

xavier666@lemm.ee · 7 months ago
Sarcasm detection is a very hard problem in NLP to be fair
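(A toy illustration of why that's hard, assuming a naive keyword-based approach: sarcasm often inverts literal sentiment, so surface cues like positive words next to negative context catch only the easiest cases. The word lists and function name here are made up for the sketch.)

```python
# Naive sarcasm heuristic: flag text where positive and negative
# words co-occur (e.g. "Great, my flight is delayed"). This misses
# sarcasm carried purely by context or tone, which is the hard part.
POSITIVE = {"great", "love", "wonderful", "fantastic"}
NEGATIVE = {"stuck", "broken", "delayed", "crash"}

def naive_sarcasm_flag(text: str) -> bool:
    """Return True if positive and negative cue words co-occur."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & POSITIVE) and bool(words & NEGATIVE)

print(naive_sarcasm_flag("Great, my flight is delayed again!"))  # True
print(naive_sarcasm_flag("Great job on the launch!"))            # False
print(naive_sarcasm_flag("Oh sure, that'll definitely work."))   # False: sarcastic, but no lexical cue at all
```

The third example is the failure mode: the sarcasm lives entirely in context, so no lexicon-based rule can flag it.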
ancap shark@lemmy.today · 7 months ago
If it’s being used to give the definitive answer for a search, then it should. If it can’t, then it shouldn’t be used for that