Hacker News | scratchyone's comments

tbh they really didn't, tinygrad's was clearly a joke response. they were not providing a real uptime target.

I mean more fundamentally, if they have access to even more advanced models than all of us and have this much downtime, does that imply that their models are possibly not so great at software dev?

But yes you're definitely right, it's perhaps more ironic than contradictory.


Agreed. Having some level of human input makes a submission at least meaningful. If the entire repo and all text is generated by an LLM, does it really matter if the human is the one posting the link? It's functionally indistinguishable from automated spam.


For what it's worth, there are modern LLM detectors with extremely low false-positive rates. The tech has advanced quite a bit since the ZeroGPT days. Personally I've gotten very good results from Pangram Labs. Still can't directly ban people though because false positives are always possible.


Are they great at detecting normal prompts that don't try to make the LLM speak non-LLM-ishly? If you make the LLM avoid em dashes, "it's not; it's" phrases and similar tells, and have it make a few mistakes here and there, would it still be detected? My point is that if people aren't trying to hide their LLM use, detection might work; otherwise it probably wouldn't. How would a detector tool fare against output where the prompt tells the LLM to alter the way it writes? Or where the LLM output is modified by another LLM specifically designed to mimic certain styles?

Like, why would my comment (or yours, or any other comment) pass or fail the LLM check if I/you/someone else used specific prompts or another LLM to edit the output? It seems like these tools would work on 99.9% of outputs, but those outputs likely weren't created in an adversarial way.


Is that false-positive rate from your own testing, or the author's claims? What is the source of ground truth?


CARROT has this and it’s amazing! You can “time travel” back as far as you want. Absurdly far, even. I can tell you that it was 20 degrees in my town on Jan 1st, 1940.


same, i run quite a few forked services on my homelab. it's nice to be able to add weird niche features that only i would want. so far, LLMs have been easily able to manage the merge conflicts and issues that can arise.
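For anyone curious what that fork-maintenance loop looks like in practice, here's a minimal sketch using two throwaway local repos (all repo names, files, and commit messages are made up for the demo). The `git rebase` step at the end is where merge conflicts surface, which is the part an LLM can help resolve:

```shell
#!/bin/sh
# Demo: keeping a personal fork's patches rebased on top of upstream.
# Everything here is hypothetical; requires git >= 2.28 for `init -b`.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in "upstream" project with one file.
git init -q -b main upstream
(cd upstream \
  && git config user.email demo@example.com \
  && git config user.name demo \
  && echo "base feature" > app.txt \
  && git add app.txt && git commit -qm "upstream: base")

# The fork, carrying one niche local patch.
git clone -q upstream fork
(cd fork \
  && git config user.email demo@example.com \
  && git config user.name demo \
  && echo "my niche tweak" > tweak.txt \
  && git add tweak.txt && git commit -qm "fork: local tweak")

# Upstream moves on...
(cd upstream \
  && echo "new feature" >> app.txt \
  && git commit -qam "upstream: new feature")

# ...and the fork replays its patch on top.
# Conflicts, if any, surface during this rebase.
cd fork
git fetch -q origin
git rebase -q origin/main
```

After the rebase, the fork's working tree has both the new upstream change and the local patch, with the local commit sitting on top of upstream history.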


Looking through the poison you linked, how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) markov chain garbage text approaches.


We do not discuss algorithms. This is war. Loose lips sink ships.

We urge you to build and deploy weapons of your own unique design.





Thank you! That's a fascinating paper.


I'm genuinely haunted by these TurboTax ads; I see that download-app popup at least 3 times a day when I use Apple News. Truly cannot believe someone at Apple thought that was an acceptable user experience for ads.


The era of Tim Apple is definitely different from the Jobs era; had Jobs seen an ad like that, he'd have fired half the company.

But management by metrics means line go up? All is good.


I'm glad that I'm not the only one this is happening to! I don't believe I've ever even expressed interest in using anything Intuit, at least consciously. Perhaps it's the accidental download-app dialog that's mimicking engagement?


Second half of this article has signs of AI slop, as confirmed by Pangram:

https://i.imgur.com/gGIAApA.png

Hard to trust an article like this when the legal analysis and suggestions are being outsourced to an LLM.


Not all AI assisted writing is "slop," especially if, as your screenshot shows, significant portions of the article were written by a human. Drawing attention to any and all hints of AI assisted writing is not the public service announcement you think it is.

Are there specific parts of the article which are inaccurate or misleading? If so, please say; it would be very interesting and would add to the discussion.


I actually think AI-human collaboration is quite beneficial. My more fundamental issue is that pure LLM-generated text is just bad writing. My general feeling is "why should you expect me to spend my time reading something that you didn't care enough to spend your time writing?"

Also, most of the suggestions provided in the AI generated section are just useless. While I think this law is terrible, the suggestions provided completely contradict what the lawmakers are intending. I'll explain what I mean with some of the suggestions provided.

> Narrow the Scope to Intent, Not the Tool

This is essentially a suggestion to throw out the entire law as written. Sure, but this is meaningless advice to lawmakers.

> Drop Mandatory File Scanning

This is the same suggestion as before but rephrased.

> Exempt Open-Source and Offline Toolchains

This is asking them to create a massive loophole in their own law, rendering it useless. Once again, it's essentially asking them to throw out the entire law.

> Add safe harbor for sellers and educators who don’t modify equipment or participate in unlawful manufacture.

Two fundamentally different concepts are jammed into one idea here. Do you want to add safe harbor for sellers who don't modify equipment, or do you want to throw out the entire law and have it not apply to anybody who doesn't participate in unlawful manufacture? These are very different ideas; it makes no sense to treat them as one cohesive concept.

All of these are signals that not much thought went into this. If a human had used AI for ideas and writing assistance, but participated in the writing process as an active contributor, I think they would have caught things like this. I don't think they would have chosen to make multiple bullet points semantically identical. I think they would have chosen to actually cite specific aspects of the law and propose concrete solutions.

Another example: one of their suggestions is to improve the working groups by adding specific members. Genuinely a fairly good idea. Having actually read the law, I would have cited the specific passage, which requires that the working group "SHALL INCLUDE EXPERTS IN ADDITIVE MANUFACTURING TECHNOLOGY, ARTIFICIAL INTELLIGENCE AND DIGITAL SECURITY, FIREARMS REGULATION, PUBLIC SAFETY, CONSUMER PRODUCT SAFETY, AND ANY OTHER RELEVANT DISCIPLINES DETERMINED BY THE DIVISION TO BE NECESSARY TO PERFORM THE FUNCTIONS PRESCRIBED HEREIN." I would question: who do they consider to be experts in additive manufacturing? Why does it seem that the working group will be far more heavily weighted towards policy experts as opposed to 3D printing experts? The article suggests that "standards will default to large vendors," yet there is no evidence here that vendors will be included at all.


> Second half of this article has signs of AI slop, as confirmed by Pangram

The corporation you're citing named "Pangram" cannot confirm anything of the sort. They only make claims, like the ones in your screenshot.

Indeed, this very "citation" of the AI-generated output of Pangram Inc.'s product is a good example of outsourcing work to an LLM without verifying it.


Pangram has extremely high accuracy. While there's no way to prove AI use, it's a very good proxy for that metric. It's obvious to my eyes that the article is written with AI, I supplied Pangram as a citation to convince people such as yourself who didn't notice the AI usage when reading the article.


> Is that true in New York? Maybe it currently requires permits

What are you referring to as "it" here? When OP mentioned getting a gun from "off the street", that's referring to obtaining one illegally, without a provenance chain or any permitting.

If you want to shoot a CEO, it's far easier to buy an untraceable gun on the street (or obtain a non-serialized 80% lower receiver that you drill yourself) than to rely on an unreliable, fully 3D-printed gun.


Ah, I wasn't familiar with "off the street" meaning that, I thought they were saying "go to a store and buy a gun". Thanks!

Is it that easy to acquire even illegal firearms in the US, that you can just walk around in NYC to the shadier streets and find randoms willing to sell them to you?


I can't directly attest to that (never bought an illegal gun), but from my understanding, yes, people have little trouble obtaining illegal guns.

However, you really don't even need to do that. You could just drive across the NY border to a state with looser gun laws, buy one there, shave off the serial number, and bring it back to NY. You could also just steal a gun from one of the many Americans who already own one.

You can also legally buy an unfinished lower receiver in many states (the part of a gun that is typically serialized). Since it's technically unfinished, it doesn't require a serial number. Then you drill a few holes into it and assemble it with off the shelf, also un-serialized gun parts.


> that you can just walk around in NYC to the shadier streets and find randoms willing to sell them to you?

You know someone who knows someone.


I'm not sure if it's still this way but when I was a kid you could buy old guns at rural flea markets or antiques shops. I've never attempted to purchase an illicit firearm, but I can't imagine it's any harder than buying illegal drugs.

