I was trying to do a memory test to see how far back 3.5 could recall information from previous prompts, but it really doesn’t seem to like making pseudorandom seeds. 😆

  • millie@beehaw.org (OP) · 1 year ago

    I haven’t had much luck with it writing stuff from scratch, but it does a great job of helping with debugging and figuring out why complex equations are doing what they’re doing.

    I put together a pretty complex shader recently, and GPT-3.5 did a great job of helping me figure out why it wasn’t doing quite what I wanted.

    I wouldn’t trust it to code anything without my input, but it’s great for advice, explanations, and certain kinds of problem solving. Just don’t assume it has the right answer; you still have to do the work.

    • jarfil@beehaw.org · 1 year ago

      I’ve tried it with languages I don’t know, and it managed to write simple working functions just by iterating through this loop:

      1. Ask it to write the code
      2. Try to run the code, write down any errors
      3. Look up the errors, and ask it to fix them in the code
      4. Repeat from 2 until there are no more errors

      It seems to lose context easily: if you ask it to fix one error, then another, it might revert the first fix. Asking it to fix both at once tends to work.
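      The loop above can be sketched in a few lines of Python. Here `ask_model` is a hypothetical callable wrapping whatever chat model you use (it is not a real API); it is injected as a parameter so the loop itself stays self-contained. Note the fix prompt sends *all* the errors back at once, for the reason described above:

```python
import os
import subprocess
import sys
import tempfile

def iterate_until_clean(ask_model, max_rounds=5):
    """Drive the write/run/fix loop: ask for code, run it,
    feed any errors back, and repeat until it runs cleanly.

    ask_model(prompt) -> str is a hypothetical stand-in for a
    chat-model call; inject your own wrapper here.
    """
    code = ask_model("Write the code")  # step 1: ask it to write the code
    for _ in range(max_rounds):
        # step 2: try to run the code and capture any errors
        with tempfile.NamedTemporaryFile(
            "w", suffix=".py", delete=False
        ) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True
        )
        os.unlink(path)
        if result.returncode == 0:
            return code  # step 4: no more errors, done
        # step 3: ask it to fix all the errors at once,
        # to reduce the chance it reverts an earlier fix
        code = ask_model(
            f"Fix these errors in the code:\n{result.stderr}\n\n{code}"
        )
    raise RuntimeError("model did not produce clean code in time")
```

      For example, a stub that first returns broken code and then a fix converges on the second round; a real model wrapper would slot in the same way.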

      I think someone could feasibly write several working functions or modules without knowing much about a given language, as long as they are clear about what they want them to do… but of course spotting obvious errors and fixing them by hand can be faster. Fixing integration problems is where I think it might get harder (haven’t tried though, could be interesting).