Hacker News | duskwuff's comments

The source code to Chromatron was made available on the game's forum at one point. Here's a copy I saved:

https://gist.github.com/duskwuff/513e2b4f38b3db2e060c8611ebf...

A couple of helper functions are missing, but nothing terribly important.


I wasn't aware it existed - thanks for saving it!

And wow, this code is beautiful.


That's definitely a pattern I've seen in some LLM output, especially when users let an LLM "run away" with an idea and write a lot of text without supervision. The drive to coin names for things feels almost characteristic of self-help or lifestyle-advice writing.

This seems like a serious hazard for users predisposed to schizophrenia - if someone is using the LLM as an "AI companion", the model is likely to reinforce, or even suggest, illusory connections between events or experiences the user has described in their conversations.

IIRC, it's well documented that negative instructions tend to be ineffective - possibly through some sort of LLM analogue to the "pink elephant paradox", or simply because the language models are unable to recognize clichés until they've already been generated.

That was definitely true of early LLMs, but I don't know if it's still the case - certainly not as strongly as it used to be. I think most negative instructions are now followed quite well, but there are still a few things, presumably deeply embedded from pretraining, that are harder to avoid - these specific annoying phrasings, for example.

Both the pink elephant effect and the accuracy drop on negative instructions are pretty fundamental biases, for humans and LLMs alike. It's impossible to get rid of them entirely; you can only mitigate them to an acceptable degree. Empirically, the only way to make a model reliable at harder negative instructions is CoT, especially a self-reflection style of CoT (write a reply, verify its correctness, output a fixed version), sketched below. If the native CoT fails to notice the thing that needs verifying, and you don't have a custom CoT or a verification loop, you're out of luck.
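
Rough sketch of that loop in Python, purely for illustration - "call" here is a hypothetical prompt-in, text-out wrapper around whatever model client you use, and the banned-phrase check is just a stand-in for whatever constraint you're verifying:

    from typing import Callable

    def reflective_reply(
        call: Callable[[str], str],   # hypothetical prompt -> text model wrapper
        user_prompt: str,
        banned_phrases: list[str],    # the negative constraints to enforce
        max_rounds: int = 3,
    ) -> str:
        """Draft, self-verify, revise - a self-reflection CoT loop."""
        draft = call(user_prompt)
        for _ in range(max_rounds):
            # Verification pass: ask the model to audit its own draft
            # against the negative constraints.
            verdict = call(
                "Check this reply for any of these banned phrasings: "
                + ", ".join(banned_phrases)
                + "\n\nReply:\n" + draft
                + "\n\nAnswer OK if none appear; otherwise list the violations."
            )
            if verdict.strip().upper().startswith("OK"):
                return draft
            # Revision pass: restate the violations as concrete feedback,
            # which models follow much more reliably than a bare "don't".
            draft = call(
                user_prompt
                + "\n\nYour previous draft had these problems:\n" + verdict
                + "\nRewrite it without them."
            )
        return draft  # best effort after max_rounds

The point of the revision pass is that the verdict converts a vague negative instruction into concrete, positive feedback the model can act on.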

And that was an extremely common repair on the system - possibly the most common one. Early-2000s laptop hard disks were pretty fragile; one bad fall could easily crash the drive.

I think iFixit might also be wrong about the Airport card. IIRC, those were socketed for regulatory reasons, not so that you could install your own.


That's from the prequel, A Deepness in the Sky. (Which is also excellent.)

A Deepness in the Sky probably has the first sci-fi alien I've read that didn't feel like a human wearing an alien suit.

Fantasy sometimes does this better, but usually with specific tropes.


If you liked that and you haven't read it yet, give "Dragon's Egg" by Robert L. Forward a read.

It's well understood that external stimuli can trigger mental health issues; for instance, the defining characteristic of PTSD is that it's caused by exposure to a traumatic event or environment. It shouldn't be at all unreasonable to suggest that exposure to other stimuli - even just interacting with an AI chatbot - could have adverse effects on mental health as well.

"Imperfect" is when your AI model tells the user that there are two Rs in "strawberry", or that they should use glue to keep the cheese from falling off their pizza. Repeatedly encouraging the user to kill themself so that they can meet the AI model in the afterlife is on quite another level.

Also: they're advertising the degree of improvement ("4x faster"), not an absolute level of performance.

That's hard to reconcile with actions like issuing DMCA takedowns on videos of the game (or even Discord messages which mention it). If fewer people know a game exists, there's less of a market for copies of it.
