Hacker News | weiliddat's comments

Yes, but they wrote it’s for a demo and it’s fine if they lose the last few seconds in the event of an unexpected system shutdown.

And also in prod, etcd recommends running with SSDs to minimize the variance of fsync/write latencies.


Getting into an inconsistent state does not just mean “losing a few seconds”.

How would you get into an inconsistent state based on an fsync change?

Edit: I meant what sequence of events would cause etcd to go into an inconsistent state when fsync is working this way


Data corruption, since fsync on the host is essentially a no-op. The VM’s filesystem thinks data is persisted on disk, but it’s not - and the pod running on the VM thinks the same …
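The durability contract in play here can be sketched like this (an illustrative Python snippet, not etcd's actual WAL code; the file name and record are made up):

```python
import os

def append_durably(path: str, record: bytes) -> None:
    """Append a record and return only once it should be on stable storage."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, record)
        # A WAL-based store like etcd treats a successful fsync as the point
        # where it may acknowledge the write to clients. If the host silently
        # turns the flush into a no-op, that guarantee evaporates.
        os.fsync(fd)
    finally:
        os.close(fd)

append_durably("wal.log", b"entry-1\n")
```

When the hypervisor acknowledges the flush without the data actually reaching disk, the guest believes acknowledged writes are durable while they only live in the host page cache; a host crash then loses writes the store already confirmed, which is where the inconsistency comes from.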

Yeah reminds me of early Bootstrap


Ian wrote a lot of in-depth technical reviews and articles at Anandtech. He’s not a nobody.

https://archive.is/2022.02.18-161603/https://www.anandtech.c...


How did you arrive at that calculation?

For an average day (incl. "non-working" hours) the brain uses far less, at ~300 Wh, and if you include the body, the average person needs ~2.3 kWh.

In the rest of the article, they estimate inference electricity use is only ~8% of overall datacenter use, and if we think of the datacenter as the "body" without which the GPU "brain" wouldn't work, that's an overall median use of ~16 kWh for only 24 Claude Code requests.
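Working backwards from those figures (16 kWh all-in for 24 requests, inference at ~8% of datacenter electricity), the implied inference-only energy per request is roughly:

```python
INFERENCE_SHARE = 0.08    # inference as a share of datacenter electricity (from the article)
total_kwh_per_day = 16.0  # overall median use for 24 requests, as cited above
requests = 24

total_wh_per_request = total_kwh_per_day * 1000 / requests          # ~667 Wh all-in
inference_wh_per_request = total_wh_per_request * INFERENCE_SHARE   # ~53 Wh inference-only

# Comparison points from above: ~300 Wh/day (brain), ~2300 Wh/day (whole body)
print(total_wh_per_request, inference_wh_per_request)
```

So even the inference-only slice of a single request is within an order of magnitude of what the brain spends in an hour, and the all-in figure dwarfs it.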

I'm more impressed with the human brain's energy efficiency + multimodality + long-term context + malleability than anything else after using LLMs a bunch, even though I learnt a lot about that in a neuropsych course a long time ago.


Yeah, PPD is more useful, although for ultrawides I’ve also heard it’s common to place them closer than a regular viewing distance, so that you can glance at side screens / information.
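For reference, average PPD can be estimated from horizontal resolution, panel width, and viewing distance (the 34" ultrawide numbers below are just example assumptions):

```python
import math

def avg_ppd(h_pixels: int, width_cm: float, distance_cm: float) -> float:
    """Average pixels-per-degree over the horizontal field of view."""
    fov_deg = math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))
    return h_pixels / fov_deg

# Hypothetical 34" ultrawide: 3440 px across, ~80 cm wide panel
regular = avg_ppd(3440, 80, 60)  # regular viewing distance
closer = avg_ppd(3440, 80, 45)   # pulled closer: wider FOV, lower PPD
print(regular, closer)
```

Sitting closer widens the field of view, so the same panel covers more degrees and the PPD drops, which matches the trade-off described above.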


Not the OP, but I’ve been thinking about why LLMs feel different and I think it’s closer to the chair analogy than I initially thought. Not able to fully articulate it but here’s my try.

Conventional programming needed you to know your tools (language, framework, OS, etc.) pretty well. There was a divergent set of solutions depending on your needs and on the craftsmen (programmers/engineers) you went to. There were many variables you needed to know to produce something useful. You needed to know your raw materials, like your wood.

With LLMs it’s weirdly convergent. Now there are so many ways to get the same thing, because you just have to ask in language. It’s like mass-produced furniture, because it’s the most common patterns and solutions it’s been trained on. Like someone took all the wood in the world, ran it through some crazy processing, and now you’re just the assembler of IKEA-like pieces that mostly look the same.

There’s a loss of necessity in the craft. It helps to know the underlying craft, but it’s been industrialized, and most people would be happy enough with that convergent solution.


I’m using Amp as my main coding agent (outside of work), and they have a free mode with ads. It used to be that free mode let you use it with lower-cost models like GLM, Kimi K2, etc., but recently they switched that to a daily $10 limit, with Opus 4.5.

Was curious whether ads would cover the cost of inference, so a bit of napkin math.

They seem to display ~3 ads per minute, on tech products, presumably with pretty good signal and intent based on recent chat history. I'm not the most up to date on CPM, but based on some basic searches, assuming $30 per thousand impressions, that's about 9c per minute and ~$5.40 per hour. Of course users aren't always looking at the agent coding, but averaged out over, say, 3 hours of usage per day, that kinda covers the cost. The $10 per day limit is probably related to average daily session use.
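The same napkin math in code (the CPM and usage figures are the rough assumptions above, not measured numbers):

```python
ads_per_minute = 3
cpm_usd = 30.0  # assumed CPM: $30 per thousand impressions

revenue_per_minute = ads_per_minute * cpm_usd / 1000  # $0.09/min
revenue_per_hour = revenue_per_minute * 60            # $5.40/hr
hours_per_day = 3                                     # assumed average usage
revenue_per_day = revenue_per_hour * hours_per_day    # $16.20/day vs the $10/day limit
print(revenue_per_minute, revenue_per_hour, revenue_per_day)
```

Under those assumptions the daily ad revenue comes out a bit above the $10/day inference allowance, which is what makes the free tier plausible.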

On ChatGPT, showing an ad per conversation to audiences with good signal and intent could have a pretty high CPM or CPC too, easily $0.01 to $0.10 per conversation? I think that's easily sufficient to cover the API pricing for ChatGPT 5.2 instant or mini thinking for the majority of users' queries.


I feel similarly. Sounds a bit like other crafts that were later industrialized and (partially) mechanized, like woodworking and carpentry.

One can certainly enjoy the laborious handcrafted process of building your own table, and yet go back to a shop that churns out cheap furniture that’s nonetheless useful for many others, and see the value in both.

Obviously there’s more degrees of freedom in software, but I’m trying to see it that way to rationalize how I’m feeling with the current state of things.


I see a lot of people saying things like: if you don't use LLMs for coding then you are slower than your peers who do, and you are obsolete, not worth working with, etc., etc.

I am curious where this train of thought has come from. For decades, there have been

a) some human programmers who worked faster than others

b) other non-LLM programming tools that enabled productivity boosts

While maybe (maybe?) slower programmers aspired to be faster, and maybe (maybe?) their managers wished they were faster, it was generally accepted. You might have someone who codes slowly, or medium, or fast, or superhuman fast. Just so long as they were "fast enough" for your business needs, it wasn't a huge issue. There wasn't a culture of shaming all of the medium-speed programmers into the ground.

And while maybe some programming tools did indeed offer productivity boosts, there really wasn't a culture of demanding all possible boosting tools be used. If you want to use Nano to edit, great, up to you. Emacs? Great. Visual Studio Code? Great. Debuggers? Great. Print statements? Great. Outside of some highbrow programmer ivory towers, there wasn't a widespread insistence that "only these tools shall be used because they let you write code the fastest".

But now there seems to be a demand that LLMs be used to write code as fast as possible.

Why? Why now, is it suddenly important that code be written as fast as possible, and anything less should be mocked and derided?


It's because the velocity difference is no longer 25% between the slower and faster programmers.


I think 25% is a low estimate. Using a proper programming editor alone could realistically offer 2x or more productivity over a basic text editor, and there have definitely been programmers who stayed with basic editors.

And I have first-hand seen programming teams where there was clearly more than a 25% difference — some could code much faster than others, and some could barely code at all.

I think it would be quite fair to say that, between tools and individual skill, there could easily be a 5x speed difference between slower and faster programmers, maybe more. Granted, LLMs are even faster, but I don’t think a potential 5x speed-up was anything to sneeze at.


What I've seen is that the productive developers are the ones who understand what problem they are solving. They either take the time to think it through or just have a seemingly uncanny ability to see right to the heart of the problem. No false starts, no playing with different implementations. They write the code, it's efficient, and it works.

The slow developers have false starts, have to rework their code as they discover edge cases they didn't think about before, or get all the way through and realize they've solved the wrong problem altogether, or it's too slow at production scale.


That sounds more like a sign of recent times.

FOSS software that many rely on that has been around for a while was non-VC: VCS, Linux / GNU / BSD, web browsers, various programming languages, various databases...


Many of your examples came from people who were funded by Universities in the 80s, which was basically the VC of the time. And in the 90s, a lot of the core committers of those projects were already working at VC funded companies.

Back then it was very normal to get VC funding and then hire the core committers of your most important open source software and pay them to keep working on it. I worked at Sendmail in the 90s and we had Sendmail committers (obviously) but also BSD core devs and Linux core devs on staff. We also had IETF members on staff.

And we weren't unique, this happened a lot.


Thanks for the insight and history. Glad to be corrected.

Was it of a different nature to current VC-funded FOSS, though? It sounds like their contributions to FOSS were tangential and not the product being sold?

Maybe a bit more like Google and Chrome?


> FOSS software that many rely on that has been around for a while was non-VC: VCS, Linux / GNU / BSD, web browsers, various programming languages, various databases...

It's honestly hard to pick out a pattern for older open source project contributions. PostgreSQL started at UC Berkeley but people contributed to it from all over. Key engineers like Tom Lane worked at a number of companies in the database field, some dependent on VC funding, some not. He's currently at Snowflake. [0] A lot of recent innovation around PostgreSQL today (Neon, Supabase, etc.) is VC funded.

That pattern changed with projects like Hadoop, which was about the time that VC funds recognized a standard playbook around monetizing open source. [1]

[0] https://en.wikipedia.org/wiki/Tom_Lane_(computer_scientist)

[1] https://en.wikipedia.org/wiki/Cloudera


Sure, those projects were un(der)funded in the 80s and 90s but the reason we talk about them today is because of the huge amount of investment - both direct and in kind - that VC backed companies have managed to give to many of them.

I think it’s easy to forget how long ago it was when FOSS truly was the outsider and wouldn’t be touched by most companies.

Mozilla/Firefox started in 1998 and then started taking ad revenue from Google in 2005, which pays for a large chunk of its development. It’s been part of the Silicon Valley money machine for 20 years, most of its existence.


IME strato is OK for cheap VPSes as long as you don’t have high expectations.

5-6 years ago, I remember that trying to scale up to the 16/32 GB tier was miserable and the instances were oversubscribed; moving to Hetzner at the same price point brought a huge performance boost (mostly in CPU and disk access speed).


Don’t want to ding strato exclusively; it’s likely the case for most $1-$4/month type hosting.

Not bad as long as you know what you’re getting yourself into.

