aeternum's comments

The most fitting method would be to train an LLM on the Claude Code source code (among other data).

Then use Anthropic's own argument that LLM output is original work and thus not subject to copyright.


At current launch numbers it may not be worth $1.5+ trillion, but valuations aren't about the present; they're about discounted future cash flows.

It seems logical that there could/will be far more demand for launch if the price were lower. Prices are quite extreme currently: a standard 3U cubesat (the size of a loaf of bread) costs $300k, and that's just to reach orbit.

There could be lots of startups that want to try robotic space mining, but launch costs make that mostly impossible currently, so there are only a select few. It's like valuing the Dutch East India Company based on the trade volumes of 1603. Of course people are not going to buy much pepper or nutmeg if it costs them weeks of labor; but build lots of reusable ships, and with each voyage more people can afford your pepper and nutmeg, until it's a common household item.


> about discounted future cash flows.

Discounted future cash flow is discounted by risk. The point is that there is a lot of risk around growing future revenue.

> seems logical that there could/will be far more demand for launch if the price were lower.

This thesis hasn't played out much in the 10 years since Falcon 9 first landed in 2015.

The non-Starlink component of revenue has not grown massively beyond the size of the market in 2015. SpaceX isn't lowering launch prices to induce demand beyond being the cheapest by just enough; they would be going lower if cost were the only barrier to more revenue.

It's not that businesses aren't possible at lower launch prices. Starlink is testament that they are.

The problem is that the rest of the world has not been able to innovate fast enough to take advantage of it, even after 10 years. The industry struggles with things like manufacturing satellites at scale, raising money, and executing on innovation.

What that means for SpaceX is that even if launch costs get cheaper than now, the launch market simply may not grow quickly enough for the valuation number to make sense. They would need to enter a lot of new markets directly and be their own launch customer beyond Starlink. This comes with its own set of execution, regulatory, and other risks. The data-center-in-space play[1] is an attempt to do this.

Whether it's the DC play or something else, they will need to find and sustain a large business to grow; maybe they will, maybe not.

None of this is very clear right now, and that is a lot of risk, so any future cash flow projection has to be discounted heavily.

---

[1] I am not qualified to comment on the technical feasibility; however, that isn't needed to analyze the company's finances. It is just one more risk factor: depending on how you feel, you can assign it anything from 0 to 1.


> This thesis hasn't played out much in the 10 years since Falcon 9 first landed in 2015.

It did play out: there are many more launches today, roughly 5x in 20 years. Starlink launches, some 75% of SpaceX's manifest (and nearly 50% of all launches), were quietly financed by its other launch customers, exactly because the real cost to launch dropped so much.

That doesn't mean you're wrong, but you do seem to forget that SpaceX, as its own customer, knows the number of launches is going to rise exponentially. They obviously choose to manufacture for where the market _will be_, while you don't see the market before it's there. Which is good for them.


There are enough Elon haters that you can rest assured there will be an inverse ETF so that you can easily hedge away your index exposure if you really want to.

It's kind of sad that we've become so risk averse. Risks should be fully disclosed but let the adventurers adventure.

Would Columbus' ships ever have been allowed to sail in the modern day? Proximity wingsuit flying and free climbing are legal, and people choose to do them even though the probability of death is extremely high. Spaceflight is significantly safer and far more beneficial to humanity, yet we block it. No one counts the lives lost due to slowing scientific progress, but we should. How much further behind would we be scientifically if Darwin hadn't ventured out on the Beagle due to endless safety reviews? Would the US be what it is today if Lewis and Clark had had to prove to Congress that the trip was safe?

Given the opportunity, many of us would choose to die as part of a grand adventure in service to humanity rather than wither away of old age.


I wish I could downvote this comment more than once. It's incredibly ghoulish to use the perfectly-sensible argument that modern culture is too risk-averse to handwave away known critical safety problems. Those two things are completely orthogonal. Yes, astronauts should be willing to accept that there are "unknown unknowns" and that they will be facing some amount of unquantifiable risk, and they should be celebrated for this. That does not, not at all, mean that when a mission comes back with heat shield failures we know should not have happened, and multiple Inspector-General reports say the ship is not safe, those concerns should be blown off with rambling about Charles Darwin. That's pure insanity.

Or to put it another way, if you were the manager on the day of the Challenger launch issuing the "go" command over the objections of the Thiokol engineers saying it was unsafe to launch in below-freezing temperatures, would you have done so with paeans to Christopher Columbus? That's the sense I get from your post.


> Camarda is an outlier. The engineers at NASA believe it is safe. The astronauts believe it is safe. Former astronaut Danny Olivas was initially skeptical of the heat shield but came around.

How do you explain so many people believing it is safe?

The problem is that risks are far too easy to brainstorm: anyone can come up with endless risks, and it takes endless time to mitigate them all.

If I were the manager for Challenger, I would have run the o-ring experiment as soon as it was brought up as a concern. Put the o-ring seals in a freezer and test whether they leak; Feynman famously demonstrated it with a glass of ice water. Experiment is what separates made-up risks from real risks. I would have definitely told the engineers to take a hike and hit launch if they couldn't provide experimental evidence of o-ring failure in cold temps. (Spoiler alert: in that case they easily could have.)


No. That famous demonstration only touched on the real failure mode: the rings were covering up another failure, and in the cold they could not do so.

The real test would have been a full-scale ignition test: an engine containing mostly inert filler (to occupy the fuel volume) and just enough fuel to reach stable burning.


> How do you explain so many people believing it is safe?

The article itself answers this question: institutional incentives leading to heavy social pressure to agree with the groupthink and declare something is safe when it is not. And we know that the scenario it lays out is highly possible, because it has already destroyed two Space Shuttles. Now that this has happened twice, the burden of proof is on the people saying it's not happening again, especially when the OIG's report directly contradicted what NASA had been saying about the heat shield up to that point (indicating they were lying and had to hastily retcon their story).


> the burden of proof is on the people saying it's not happening again

This specifically I take issue with. It's like saying: you had a bug in your software before, so now the burden is on you to formally prove your software is bug-free.

The burden of proof should remain on the naysayers. Take a plasma torch to the heat shield pock marks and see how long it takes to burn through. Do experiments, just as Feynman did with the o-rings. Let the outcome of the experiment, not office politics, decide.


I'd say when two conditions are true:

1) you have an established pattern of behavior of ignoring safety concerns (Challenger, Columbia), and

2) people are alleging that you are doing the same thing now, with independent auditing from the OIG backing them up,

that's sufficient to shift the burden of proof back onto you.

Your attempt at a gotcha with the heatshield is just ridiculous: everyone already agrees the heatshield works in small-scale testing. That's the entire problem! It failed on the actual mission and NASA couldn't explain why, so instead they pivoted to trying to explain why the failures don't matter.

(EDIT: As an addendum, I'll also add that you don't even need to go back to Columbia to find an example of NASA lying about safety to protect reputations. Remember when they insisted for months that the Starliner mission was going just fine, and then eventually said the astronauts weren't coming back on it, and then it landed and the final report was that there were multiple failures leaving it on the knife edge of total catastrophe? And remember how that was less than two years ago? You're a maniac if you take the safety claims of this organization at face value.)


Every Show HN should come with a disclosure detailing exactly how much compiler was used to create it.

I mean it's not that using compilers is bad, it's just that those who use them aren't real coders.


Nobody said anything about “real coders”.

But yes, people generally do not review and comment on compiled code. If your source is written by AI, why is it a surprise people might be hesitant to spend their time reviewing what it produced?


false equivalence

You had to change the ending because following it through actually made total sense. You kinda pulled a trick, no doubt to convince yourself, if I'm being fair to you.

It actually would have been better if I'd kept the ending: you don't write your code, a compiler does.

You simply described what you wanted in more abstract, far less specific language.

Before, we were at least 2 compilation/translation steps removed from machine code; now we are 3.


I'm not sure I agree with “far less specific language”. I became interested in programming precisely because it's far more specific than human language, and yet this didn't lead to me preferring assembly code over high-level abstractions.

you’re dialectically better than this.

A lot of this is just wrong. AI can now see the output.

It's interesting that highly flawed opinion pieces like this are so popular.


No, that is not the issue. Runway incursions have always been a problem, and many deaths have occurred.

There have been many attempts to change phraseology, teach pilots and controllers to always readback runways, etc. but nothing that actually prevents the issue from occurring entirely via automation.


The incursion was by a fire engine which was hurrying to handle yet another incident. The weather was foggy, it was raining, and the incoming plane was already low, so it was pretty hard to tell it apart from many other lights shining from the fog in the distance. It's not easy to assess the speed of motion when a fuzzy ball of light is advancing right towards you.

The pilot was given the clearance to land before the fire engine was dispatched. Apparently there was not even enough time for the crew to max out the thrust and try to lift off the strip even if they managed to notice the lights of the incoming fire engine.


Planes use a system called TCAS to prevent collisions in the sky. This system is independent of ATC and works even if ATC is not paying attention or if pilots have the wrong frequency tuned. It detects impending collisions and gives both pilots clear, automated alerts plus an action (e.g. climb or descend) to execute immediately to prevent a collision.

A similar system can and should be used for runways.

As a thought experiment, imagine how many car accidents there would be if, instead of traffic lights, each person had an AM radio in their car and police officers called out over the radio which cars should proceed across the intersection. That is the unfortunate state of "modern" aviation.
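For the curious, the heart of this kind of conflict detection is a "time to closest approach" (tau) test: range divided by closure rate. A toy sketch in Python, with made-up names and thresholds of my own (real TCAS II logic is far more involved, with altitude-dependent thresholds and advisories coordinated between the two aircraft):

  from dataclasses import dataclass

  @dataclass
  class Track:
      range_m: float       # slant range to the intruder, metres
      closure_mps: float   # closing speed, m/s (positive = converging)
      alt_diff_m: float    # intruder altitude minus ours, metres

  RA_TAU_S = 25.0          # assumed advisory threshold, seconds
  RA_ALT_WINDOW_M = 180.0  # assumed vertical window, metres

  def advisory(track: Track):
      """Return a resolution advisory if a conflict is imminent, else None."""
      if track.closure_mps <= 0:
          return None  # diverging traffic is not a threat
      tau = track.range_m / track.closure_mps  # seconds to closest approach
      if tau < RA_TAU_S and abs(track.alt_diff_m) < RA_ALT_WINDOW_M:
          # Real TCAS coordinates advisories so the two aircraft diverge.
          return "CLIMB" if track.alt_diff_m <= 0 else "DESCEND"
      return None

  print(advisory(Track(range_m=4000, closure_mps=200, alt_diff_m=-50)))  # CLIMB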


TCAS disables below 1,000 feet because there’s too much stuff at an airport.

I have ADS-B in my airplane and can see everything on the ground on a pretty map as if it were literally a video game. I can see landing aircraft in realtime while holding short or crossing a runway. The emergency responder should have had it in their fire truck.

The technology already exists. The problem has already been solved with an iPad and a $200 receiver. Almost certainly some BS regulation or rule was at least partially responsible here.


Information overload is a thing, and there are a lot of ground vehicles at a place like LGA.

Consider that if you have access to all the local ADS-B data, you can project paths forward through 3D space for the next, say, 30 seconds or so. Using GPS you can determine your own position in 3D space. At that point it's trivial (and I'm not handwaving here, it is literally extremely trivial) to filter projected paths based on whether they pass close enough to your own in 3D space (i.e. accounting for altitude). Stick that on a tablet and require it to be present in all vehicles that operate on the tarmac.
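A minimal sketch of that projection-and-filter step, assuming straight-line extrapolation and using hypothetical names (a real system would model turns, acceleration, and sensor error):

  import math

  def project(pos, vel, t):
      """Dead-reckon an (x, y, alt) position t seconds ahead."""
      return tuple(p + v * t for p, v in zip(pos, vel))

  def threats(own_pos, own_vel, tracks, horizon_s=30, radius_m=150):
      """Yield (track_id, seconds) for tracks passing within radius_m of us."""
      for track_id, (pos, vel) in tracks.items():
          for t in range(horizon_s + 1):
              if math.dist(project(own_pos, own_vel, t),
                           project(pos, vel, t)) < radius_m:  # 3D, incl. altitude
                  yield track_id, t
                  break  # the first conflict time is enough to raise an alert

  # e.g. a stationary truck at the hold-short line vs. a plane on short final
  tracks = {"N123AB": ((1500.0, 0.0, 60.0), (-75.0, 0.0, -3.0))}
  print(list(threats((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), tracks)))  # [('N123AB', 19)]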

It wouldn't need to work 100% of the time because you'd still be required to contact ATC. The only requirement is that it have a reasonably high chance of alerting drivers to potential mistakes before they happen.

Which is to say this incident was trivially preventable had anyone with authority over these sorts of things cared to bother.


> for the next, say, 30 seconds or so

But this is a hand-wave.

This is a situation where both vehicles got explicit permission from someone who's supposed to know what they're doing. These sorts of runway crossings aren't unusual - and this one was responding to an emergency - and at a place like LGA there's always gonna be a plane on approach.

The difference between "hold short at runway 22" and being on runway 22 is much less than 30 seconds in some cases.


A clearance from ATC means you can land, not that you must land, nor that it is safe to land. The PIC still has the ultimate choice. It's common practice in the US to issue landing clearances even when another plane is on the runway, or when there are two landing planes ahead also holding landing clearances; if that weren't done, you would be waiting far longer at the airport.

It's obviously the right choice to give the PIC this information via avionics in a graphically concise way that highlights potential runway contention, because the contention is real and pilots are expected to adjust their speed to maintain the right sequence.

When that isn't possible, which does happen (e.g. a plane ahead is slow to clear the runway or to take off), pilots are expected and required to execute a go-around.


If ATC says you're clear to cross the runway and then you glance down and the screen shows a plane projected to cross directly in front of you in 10 seconds you'd probably think twice, right? This hypothetical cheap appliance has GPS and a compass and probably even a camera feed facing forward. It isn't a difficult technical problem to calculate the time offset at which the object traveling along the crosscutting path will pass in front of you.

> The difference between "hold short at runway 22" and being on runway 22 is much less than 30 seconds in some cases.

What is the typical minimum temporal separation? I would have expected at least 45 or 60 seconds given the cost of a plane and the imminent threat to life.


So, it does appear I was correct here - it's a difficult thing to solve.

https://www.nbcnews.com/news/us-news/investigators-search-an...

> Jennifer Homendy, chairwoman of the National Transportation Safety Board, said at a news conference Tuesday that the airport uses a safety system called ASDE-X to track surface movements of aircraft and vehicles.

> "ASDE-X did not generate an alert due to the close proximity of vehicles merging and unmerging near the runway, resulting in the inability to create a track of high confidence,” Homendy read from an analysis of the system’s performance.


I think the most generous interpretation of using 'all' ADS-B data (including things on the ground) would be to have VR and have boxes for all objects, à la the F-35 helmet:

* https://www.radiantvisionsystems.com/blog/worlds-most-advanc...

Not sure if you can do something on a 'simple' HUD that many planes have, so you could see objects in your flight path.


The entire point is that there's no need for someone driving a ground vehicle to see all the ADS-B data. They only need to know if and when a plane is projected to cross the direction in which they're facing. It might also be useful to know the projected speed as well as how far in front of your vehicle it will pass (but you can presumably figure the latter out on your own because, y'know, the runway).

Well imagine if we designed a TCAS-like system that did work below 1,000 feet!

TCAS is mostly "how close to another plane am I?"

In flight, the answer really shouldn't ever be "less than 500 feet". During landing, the answer almost certainly will be "less than 500 feet"; a plane's queued up to enter the runway after you land, a ground vehicle is working on something near but not on the end of the runway, etc.

It's a surprisingly tough challenge to solve a) reliably and b) in a way that doesn't cause a whole bunch of false go-arounds wreaking havoc on the busy airport.


It's a much easier challenge when every moving vehicle in the airport environment is issued a relatively exact clearance.

The ATC clearance gives each vehicle a movement contract.

It would be great if avionics actually took those into account. Avionics should help pilots ensure their vehicles don't break the contract, and should alert immediately if other vehicles have broken theirs, or if their velocity vector is such that they will (e.g. braking rate is insufficient to stop before the hold-short line). ATC computers should ensure no conflicting clearances are issued.

None of that happens right now.
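To make the braking example concrete, here is a minimal sketch of that hold-short check, with illustrative numbers of my own (real avionics would use certified performance data):

  def hold_short_alert(speed_mps, dist_to_line_m,
                       max_decel_mps2=3.0, margin_m=20.0):
      """True if braking at max_decel can no longer stop us short of the line."""
      stopping_dist = speed_mps ** 2 / (2 * max_decel_mps2)  # v^2 / 2a
      return stopping_dist + margin_m > dist_to_line_m

  # 70 kt is roughly 36 m/s: stoppable from 250 m out, but not from 200 m.
  print(hold_short_alert(36.0, 250.0))  # False
  print(hold_short_alert(36.0, 200.0))  # True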


Apart from anything else, if you are operating an emergency vehicle on the road in "emergency mode", liability defaults to you unless demonstrably otherwise. I get that this is not a road, but...

Almost every fire department in the country has SOPs for operating in emergency mode that generally include coming to a stop at all intersections or at least being able to affirmatively clear the intersection.

This personal liability is not particularly appealing in the world of fire, where ~70% of US firefighters are volunteer (not that the story is better in career), so codifying it in SOP allows departments/governments to negotiate insurance policies for their members, saying that "if you were driving in emergency mode, but within SOP, the department's insurance will cover your personal liability".

I saw the video. The incoming fire engine didn't appear to slow until too late, either.


Much of big tech became Product leaders running amok. Somehow it shifted from "users know best" to "Product knows best".

I think this all stemmed from everyone wanting to be Apple, except no one actually achieved it, and now we have 3 different versions of the audio control panel in Windows, the start button is somehow in the middle of the screen, and Windows search no longer searches your PC.

Deleting "Product" might save Windows; short of that, I am doubtful.


Apple achieved it with Mac OS X Snow Leopard. Apple then spent ~15 years un-achieving it. It started with iOS 7, and has culminated in the Liquid (Gl)ass era: a mess of unintuitive menus, terrible and inconsistent UI patterns, the lobotomite twins Siri & Apple Intelligence.

Although, surprisingly, built on top of absolutely incredible silicon.


> Although, surprisingly, built on top of absolutely incredible silicon.

To me that's because that's a capital-E "Engineering" driven task that Product can't get their grubby little mitts on and ruin.


[flagged]


Pretty cool being racist. I noticed people from varying ethnic backgrounds seemed to land in particular divisions (maybe schools in those countries focused on these cores), but I wouldn't ascribe nationality to anything as broadly as you did.


Minority in Apple R&D is mostly Asian, not Indian


I don't particularly care what their ethnicity happens to be. Just write good, bug-free code that does things people want. How do they get there from here? No f'ing idea -- but I know that first they have to want to, and they _clearly_ do not.


  > Just write good, bug free code that does things people want
in big tech, this is rarely caused by individuals and more by management, just fyi

It's a fair point. The Indian takeover is more relevant at Microsoft.

It has a POSIX shell; all is forgiven. Can't complain about UI patterns that I never interact with.


Exactly.

Who could have imagined Apple would eventually inherit Sun's crown as the king of the RISC Unix workstation?


No one, given how A/UX went down.

It was a mix of not buying Be, having a reverse acquisition with NeXT, Jobs taking over the reins yet again, and Sun making a bunch of bad decisions.

Nowadays, following the spirit of old Apple, they only care that the UNIX underpinnings are good enough, and that's about it.


Still, they managed to bring Unix to the masses. On a RISC platform even.

And their Unix is just fine.


UNIX had already won the server room by then, with RISC.

Let's not pretend that outside of IIS with ASP (later ASP.NET), Active Directory, SharePoint, SQL Server, and SMB, there were any other deployment scenarios left for Windows.


I have to confess I've never seen a modern .NET stack deployment on anything other than Linux.

I still regret that I did not get a ModBook running Snow Leopard.

Snow Leopard was riddled with bugs. Take a look at all the updates it had following its release to confirm that.

If that's true, consider what it means for modern macOS quality when people look back on SL fondly as the most stable release.

I think it means they are wearing rose tinted glasses.

Exactly. SL was not as polished as people remember it to be. They’re just delusional with nostalgia.

The noteworthy thing about Snow Leopard is that it was released at all. Apple had shifted all their effort onto getting the iPhone released and development on OS X pretty much stopped.

I thought they delayed Leopard, not Snow Leopard...

I have become ~comfortably numb~ delusionally nostalgic.

Was it? I remember it being a pretty solid experience.

> ... the start button is somehow in the middle of the screen ...

If you take a look at the size of widescreen monitors, you can kinda guess why someone decided to move the start button/menu to the middle of the screen.

I know Samsung and Dell have ginormous 49-inch monitors. A Start menu that pops up from the lower-left corner of the monitor would be bad UX: the user might not even notice that a menu had popped up if that lower-left corner is out of their peripheral vision.

Moving the Start menu to the middle of the screen does go against years of muscle memory, though: moving your mouse/trackpad to the lower left, using the monitor border as a stop zone.

I guess they didn't want to make it an option/toggle hidden in some dialog box somewhere...


You used to be able to move the taskbar to any side of the screen. IMO the more sensible move for widescreen monitors is to move it to the side so it takes up less screen real estate. Windows 11 removed the ability to move the taskbar like that; it's stuck on the bottom (unless you seek out 3rd party software solutions).

Also it should be noted that (at least as recently as September, haven't used 11 since) you could move the start button back to the left side.


The article mentions they're (re?)adding in the ability to put it on any edge of the screen

Which is absolutely a good thing, but my point is that they removed a feature just when it had become more relevant with time. They get no credit for moving the start button to the middle, which is admittedly defensible if the goal was to accommodate widescreen displays, when they removed the ability to move the taskbar entirely, which had been in Windows for 25+ years and also had that benefit.

It's inconceivable they ever removed it. Why would they do that? It's been a feature for a long time and a lot of people use it.

If you're going to introduce a new thing, you have to make sure it justifies replacing the old thing. The new windows 11 taskbar was essentially a straight downgrade.


> We are introducing the ability to reposition it to the top or sides of your screen...

Using the word "introducing" is so disingenuous to me, considering how long that was a capability.


I have a 48" 4K non-curved monitor, running stock KDE with the launcher in the corner and UI scale set to 100%. Not only is the experience just fine, I simply cannot see having the launcher in the middle being useful. It would lead to a break in left-right organizational thinking for where windows and pinned tasks live as my active applications change, and I'd have to hunt for their new screen positions. Alt-tab breaks down after a certain number of windows, as does the exploding "overview". Having a consistent order and positioning for multitasking is both faster and less cognitive load.

The user would most definitely notice a menu popped up, because they clicked it themselves.

Tapping the Windows key on the keyboard can also bring up the Start menu (or another key combo like WindowsKey+X brings up the Power User/WinX menu ).

With the Windows Key next to the Alt key on many keyboards, the user could press the Windows Key accidentally when they wanted to press the Alt key.


I've never been bothered by Windows's changes, and I mostly think they were reasonable. But for a number of reasons it's never going to be easy for them to gain total acceptance: 1) the massive backwards compatibility back to Windows 95 stuff, 2) the willingness to try new and/or silly things that Apple is too stuffy to try, and 3) the fact that there's only ever going to be one "flavor" of Windows; if we were stuck with one single Linux distro people would be complaining about that one too.


There are two major problems with modern Windows.

The first is coercion. Installing without a Microsoft (Outlook) account is more and more difficult. An attentive steward of Windows would allow older GUI themes (XP, Win7 Aero, etc.) to be applied for the nostalgic. And there would be an easy control to disable all Copilot integration. Microsoft is coercive towards their customers with these and other actions.

The second is incompetence. The Windows update process is intrusive, lengthy, and prone to repeatedly bricking unlucky PCs. Linux updates are far more pleasant.

These are big problems, and I agree, it will take great institutional change to curb these abusive tendencies. I don't know if they can.


> incompetence

Man... it's 2026, and just yesterday I did "Update and Shutdown" only for it to "Update and Restart" instead. It would be funny if it weren't so sad...


Most updates need to reboot once or more, but the final one should have shut down.

Now, don't get me wrong, what the hell is so special about Windows that it needs to reboot for every little update operation?


It doesn't. I have installed many Windows updates that didn't require a reboot, even ones I expected to need one, like an update to a graphics driver. The screen just went blank, then came back a second later.

AFAICT it's only updates to things that run at startup time that require a reboot, probably because NTFS doesn't allow you to write to a file that's currently open (as opposed to nearly every Linux filesystem, which handles that just fine: the process that has the file open continues to see the "old" file, while any that open it after the write will see the "new" one -- but NTFS, probably due to internal architecture, can't handle that, and so you have to reboot to change files that background services are using).
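The Linux behavior is easy to demonstrate; a minimal sketch with made-up filenames (POSIX semantics assumed; on Windows, the rename typically fails with a sharing violation while the file is held open):

  import os

  # Create the "old" file and hold it open, like a running process would.
  with open("libfoo.so", "w") as f:
      f.write("old contents")
  reader = open("libfoo.so")

  # Write the "new" version and atomically rename it over the old name.
  with open("libfoo.so.new", "w") as f:
      f.write("new contents")
  os.replace("libfoo.so.new", "libfoo.so")  # rename(2), atomic on POSIX

  print(open("libfoo.so").read())  # "new contents": fresh opens see the update
  print(reader.read())             # "old contents": the old inode lives on
  reader.close()                   # last reference dropped; inode reclaimed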


It has nothing to do with NTFS, and everything to do with the Win32 API. The Windows kernel supports this file model, as proven by WSL1. There is a blog post somewhere (Old New Thing?) stating the engineers would like to, e.g., allow deleting a file even if a program still holds a file handle to it, but are concerned that deviating from current behavior would cause more problems than it solves.

The reason they want a reboot is that they do not want to support a system using two versions of the same library at the same time, say ntdll. So they would have to close any program using that library before programs that use the new version can be started. That is equivalent to a reboot.

And I completely understand the reason. For a long time, when Firefox updated on Linux, the browser windows still open were broken: it opened resources meant for the updated Firefox in the processes running the non-updated Firefox. The Chrome developers mentioned [2] that the "proper" solution would be to open every file at start and pass the file descriptors to the subprocesses, so all of them use the same version of each file. Needless to say, resource usage would go up.

[2]: https://neugierig.org/software/chromium/notes/2011/08/zygote...


Thanks for the correction. Not having had to write anything against the Win32 API, I learned something from your comment. Appreciate the info.

> The Chrome developers mentioned [2] that the "proper" solution

Or to install into versioned prefixes, so the old keeps using the old files.


This isn't an NTFS thing. The I/O Manager implements NtLockFile. Applications can request exclusive byte-range write access to a file. And perhaps it is lazy programmers, or defaults, but they generally do.

I don't think Microsoft sees client machine reboots as an issue, and it used to be much worse when they used to be released weekly. On the server side, Microsoft expects that you'd implement some form of high availability.

NTFS on non-Windows follows the locking semantics of the underlying driver model/kernel, e.g. you can replace an in-use file on Linux. Likewise, using FAT on Windows you cannot replace an in-use file. This is just to demonstrate it isn't a file system-specific "issue" (if you feel it is one). It was a design decision by the original NT OS/2 development group.

Ultimately, the NT byte-range locking is a holdover from NT OS/2, where in OS/2 byte-range locking was mandatory.


It's a myth that Linux handles this better.

There are enough apps that keep old files open but also (re)open updated files that don't match the old, open ones, and thus have all kinds of issues. (Subjectively, Thunderbird has major issues if it isn't restarted after the libs it depends on are upgraded.)

I stopped answering support mails and tickets from users with long uptimes with anything other than: reboot first. And it was the cause of the problem more than 80% of the time. And yes, most times a logout would suffice, but with our users having >100-day uptimes on desktops and laptops, the occasional kernel update is done /en passant/ this way. (The impatient could kexec and have the advantages of both, or look at the output of "needrestart" or "checkrestart". But I couldn't care less in the case of end-user devices.)


Windows can't replace files that are in use, and that includes running programs or loaded DLLs. Linux can: it keeps the inode around and only actually deletes it when the last reference is closed.

I've read this many times, so I tried it a few times, giving it the benefit of the doubt, only to find the PC on the login screen the following morning every time.

Ugh, I've had this happen over and over. I can't trust my laptop to actually shut down. I have to wait to see the light stay off for a couple seconds before I put it in my bag.

The fact that Microsoft are doing zero nostalgia marketing is baffling to me.

Put a clippy skin on copilot and people would probably install it voluntarily.


If you have two candidate UI designs, you pick the better of the two. If you have an established UI and a candidate, the new design needs to be dramatically better. It has to scream superiority. If it isn't that, you are just ruining UX.

I installed Gimp one time. I like to casually draw on autopilot, usually while doing something else: talking, watching a movie, listening to a podcast, etc. For some reason half the icons were missing and the existing set was replaced with the hipster horrifying flat single-color monstrosities. This would have been a waste of their time if it was only an option for no one who wants this some place buried deep in the settings where it would only clutter the necessarily complex options.

With MS it feels more like intentionally trolling the user

The best spot for the applications submenu is to not make it a submenu. The second best is to leave it wherever the fuck it was before. I want to struggle remembering what an application was called and wonder why they are organized so poorly (not by file association). Instead they have me wonder where they even are???


I'm actually not sure what you're saying about GIMP. I mean - I understand the frustration, the "button groups" or whatever they did to declutter things made things (imo) worse; I don't think it's a good default.

BUT

I don't actually understand your sentences for the most part. I really had to work to glean what you were talking about.

I'm not trying to be insulting here; sometimes I write in inscrutable ways too. But - could you reword a few things so I know what you're trying to say?


I've never been sentenced to repeating myself. I'm sure people normally hope for improvements in silence without informing me. Thanks!

The general point was that "improvements" that ruin muscle memory usually aren't improvements. It should be the most basic UI design principle.

One should be able to instinctively click on the Gmail icon while focused on the task at hand. If the icon isn't where one expects it to be you are no longer doing email things. Same goes for having the user search for the inbox inside the application. If they can't find it they are unproductive and feel dumb but they aren't to blame. Some bad designer came up with the brilliant idea to call it "all mail". The inbox is expected to live at the top of the menu. You can't improve it.

It's such basic stuff. It's like someone used your tools or your kitchen and put everything in a new spot. Eh, I mean the wrong spot.

I could give 1000 examples inside Windows, but it seems everyone is trolling their users. They all want to create the new and improved Slashdot, now without threaded discussions! - Hurray!


Here’s a great example of UX that needed changing despite breaking muscle memory:

https://m.youtube.com/watch?v=QYM3TWf_G38&vl=es-US


Very impressive effort. He mentions so many cool ideas. I really like the drop-down with what you want on the toolbar. Good way to unclutter the settings menu.

I wonder what would be a good way to visualize settings the user changed (and changed back), and some way to see the defaults. Perhaps save custom settings? Useful, but it opens even more cans of worms.

Say, after I change the minimum font size in my browser, is it still usable for web design?

What if I want to configure Audacity for podcasts and for music?

I wondered if one could ask the user when they start using an application, but it seems unworkable.

Then I had a silly idea for a slider that moves the UI in time with animations, so that you can see buttons fly in and out of submenus. Slide it a few decades to the left and you are back in Windows NT. Not a realistic thing for MS to make, but depending on the project it might be cool.

Then I had an idea for a tree-shaped slider with all the UI branches in it.

For websites I often keep the old designs on the server and append a date to the file name. Never had a reason to expose the user to those, but it could be fun. I did lots of crazy experiments that didn't live up to expectations.

Can instead throw designs in there that appeal to single-digit-% users. And then they won't be able to find them.

Maybe some day AI can save us.


> This would have been a waste of their time if it was only an option for no one who wants this some place buried deep in the settings where it would only clutter the necessarily complex options.

I'm not sure what this sentence means. Perhaps you already knew that Gimp's monochrome icons can be replaced by colorful ones by going to the Gimp settings under Theme -> Icon Theme, and unchecking the "Use symbolic icons if available" checkbox. That may be what you meant by "some place buried deep in the settings". But if you didn't, at least now you know how to get the colorful icons back.

The reason I'm making this comment, though, is to contrast it with Windows. A comment by chasil, left shortly after your own comment, said that "[a]n attentive steward of Windows would allow older gui themes (xp, Win7 Aero, etc.) to be applied for the nostalgic." Gimp has done just that: in Icon Theme, you can choose the "Default" or "Legacy" icon theme, so if you got used to the older icons, you can get them back. And you can still use the newer icon set if you like, but get the icons' colors back by unchecking a (confusingly-named, the name definitely needs improvement) checkbox. Windows doesn't have any built-in way to get the older themes back; if you want Windows 11, or even 10, to look like Windows 7 or XP or whatever version you trained your visual memory on for years, then it takes third-party software to make that possible. (And it may not even be possible, I haven't checked).

When even one of the most infamous-for-confusing-UI pieces of open-source software (I mean Gimp, of course) is doing a better job of providing good UI than Microsoft is, Microsoft has a problem.


I'm happy the old icons are still available. I consider the flat icons (as the default) a bad idea because one can't use them with peripheral vision. Even after getting used to them, I have to look much longer to see which does what.

There's another pressure: each major release has to look different from the last one, otherwise it feels like a minor release. In this regard XP, Vista and 7 were successful. 8 also succeeded here, but at the expense of usability.

It doesn't have to use different window layouts, just differently themed decorations. Changing the default wallpaper is a simple way to do it.


I would say the primary reasons that Windows is still acceptable are familiarity and games. Nothing else.

Non-tech people don't care about the control panel etc.; they just go through the pain of entering the WiFi password. Done.

- Gamers: double-click, install, go on. I know very few gamers that have moved to Linux.

And corporate. Most normies that I know DON'T have their own computers. Everything can be done via smartphone these days.


With games it's performance. I have a graphics card; I'm uninterested in losing a few percent off it for running on Linux.

It's doomsday for Windows if Linux starts outperforming it. If SteamOS for PC still required me to dual boot - which I already do - but guaranteed I'd get 100% of Windows performance or better, then that would be the official end.

It's not clear to me this couldn't happen either: I am very willing to hand over the entire PC configuration if the promise I get in return is "your games will run as fast as it is possible to run them".


Depending on which game, and which month it is measured in, Linux and Windows have been on par or trading blows for performance. Last I saw the performance had swung back slightly in favour of Windows though (seemed they started fixing some of the issues they had).

When you think about it, it is kind of insane that Linux can match or outperform Windows when it has an extra layer translating the system calls. And for many of us, who don't play competitive twitchy shooters at a high level, the performance of gaming on Linux is perfectly adequate currently. I played Baldur's Gate 3 on Linux earlier this year, for example, and it maxed out the frame rate of my monitor.


I'm not sure it does have an extra layer. Reading through the design, it's quite possible the number of layers is the same or less. It might translate win32 calls to Linux libraries and system calls, but on Windows pretty much the same thing is happening, win32 -> lower level libraries and system calls.

I haven’t had a Windows box in about 8 years, but even back then all the big names had consistently better performance on Linux.

Usually about a 10-20% fps improvement for my usual fare in those days: League, Overwatch, Civ5, Minecraft, Crusader Kings, Factorio, etc. Try it for yourself and see what you get.


I'm very sus on Linux not receiving regular driver updates. Just last month I solved an ongoing issue I had with many games - no uniform reason, but widespread force closures without error messages, and some of the games with the most significant issues could run on a potato. I don't have a potato. It was frustrating, and it turned out to be a random driver whose updates I had blocked, so it never got fixed. In a few years, when I am thinking about upgrading, I might consider Linux. I really do expect more users and more stuff made for that platform in the near future. Drivers are a high priority though, because they directly translate into functionality.

What do you mean? If you install a distro with a fast release cycle, you'll get constant driver updates!

IIRC from some Discord threads, some games already perform better on Linux than on Windows. We are getting there. The only moat left is kernel anti-cheat for games like Battlefield. I'm just fine if those stay on Windows, actually.

Windows compatibility is pretty overrated at this point. There is a bevy of programs we use commercially that are quite old that just don't work on 11, and don't work well on 10. Compatibility mode only gets you so far.


> 1) the massive backwards compatibility

Greatest strength. Greatest papercut.


Golden handcuffs

> Somehow It shifted from users know best to "Product" knows best.

In a world where consumers have less and less power, products are designed to please CEOs.

Money is power, as inequality grows and concentrates the average user/worker/citizen has less power and their voices matter less. Today's Internet is designed for the needs of big corporations, users are there just as another product to be sold.


At this point Apple isn't even Apple. Product ate the world. I don't remember the last time someone came to me with a customer problem to solve. It's all warring fiefdoms.


Perhaps AI is taking off because it is the only thing actually listening to customer problems.


Great point. Just last week I used AI to build a minimal replacement for a SaaS tool I’ve used in the past that has obnoxious feature gating/price tiers. My version isn’t nearly a complete replica, but it has the base functionality I want without having to feel like someone spent hundreds of hours perfecting price tiers with artificial limitations that annoy me just enough to upgrade.

Getting a tool that did exactly what I wanted with no fuss was delightful.


Monkey's paw curls: listening to customers, except literally and 24/7.


Best insight I’ve seen today, thanks for this!


Someone called it a number of years ago, once every kind of brand-new Apple device couldn't plug into another without a dongle.


It's like...like a game...of thrones...


> windows search no longer searches your PC

Absolutely baffling, when the perfect, magical, instant, high-performance search tool has existed for a decade at least: "Everything".

One of THE BEST windows apps.


If you like “Everything”, you might like https://filepilot.tech/ - a 2MB, no install, Explorer clone designed to be quick and including a similar fast search.

As an ex-WinForms dev, I haven't touched Windows since I got an M1 Max.

Microsoft has always known better than their users, they practically invented this attitude. Others then copied it.


People holding Apple in high regard should also learn about its history, and the near-bankruptcy that failed to kill the company only out of sheer luck.

> and now we have 3 different versions of the audio control panel in Windows

And yet somehow none of them are as nice as https://eartrumpet.app/ lol


Even this cannot adjust volume levels independently for multiple tabs in the same browser, which I have always been able to do on Linux with PulseAudio/PipeWire. People on Windows use browser extensions for this, with full access to all tabs/sites...
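For the curious, that per-stream control is scriptable: each tab or app stream shows up as its own sink input. A rough sketch driving pactl from Python (PulseAudio/PipeWire assumed; the matching logic here is simplistic and the names are illustrative):

  import subprocess

  def set_stream_volume(name_substr, volume="50%"):
      """Set the volume of each sink input whose properties match name_substr."""
      out = subprocess.run(["pactl", "list", "sink-inputs"],
                           capture_output=True, text=True, check=True).stdout
      index = None
      for line in out.splitlines():
          line = line.strip()
          if line.startswith("Sink Input #"):
              index = line.removeprefix("Sink Input #")
          elif index is not None and name_substr in line:
              subprocess.run(["pactl", "set-sink-input-volume", index, volume],
                             check=True)
              index = None  # one adjustment per stream

  set_stream_volume("Firefox", "35%")  # quiet every matching Firefox stream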


Every time I try to build a castle in my swamp, it gets to a certain height and then it just sinks?

STOP telling me about civil engineering, we fucking invented that shit. And NO, we have to build it in the swamp, it feeds us and keeps us safe, and I'm darned proud to say we invented that too.


Thanks, I actually didn't realize that my basically stock Linux install already did this

What makes that nicer than the built in volume mixer?


Per-app mixing on the first-level menu. I like SoundSource on macOS for the same reason: https://rogueamoeba.com/soundsource/


I right click the volume icon in Windows, select "Volume Mixer", and it gives me per-app mixing. Which I guess is an extra click, as with eartrumpet you can access the mixer with a single left click on the icon.

Had to stop using EarTrumpet because it kept randomly pulling the CPU to near 100%. Updating didn't help.


> I think this all stemmed everyone wanting to be Apple except no one actually achieved it

Given the repeating pattern of Apple shipping a hated operating system update in recent years, it feels like it's more "everybody wants to be Steve Jobs and no one actually achieves it, including Apple".


This will be a controversial opinion but I think some escalation by police is warranted.

The reality is there are aggressive people in society that have a tendency to escalate things. If police are trained to only de-escalate, it removes a powerful check on aggressive escalation.

The second order effect is an increase in events like people being pushed onto train tracks, glass bottles being thrown if you glance the wrong direction, etc.

I think optimally you have a police force that is trained in de-escalation but also escalates things slightly more than the average citizen and thereby provides a service to society as a buffer.


I don't think you understand what "de-escalation" is. It's not ignoring antisocial behavior or failing to confront people.


I'm not claiming that de-escalation implies ignoring antisocial behavior or failing to confront.

I'm claiming that some antisocial behavior is only triggered by some degree of escalation.


I wonder what you base this on. What would statistics backing this up look like?


It's a good question.

It would be correlating violent crime trends with de-escalation training, or even with complaints of police aggression.

Even more useful would be to separate out assault and battery where the victim is random vs. non-random, i.e. domestic or gang-related.


I had a look around, and there is very little actual evidence for detrimental effects. Most of it seems to be exaggeration by politicians who want to be tough on crime.

Amazing, never thought it would happen.

Ridiculous to have laws that unfairly protect dead industries. Dockworkers next, please, so we can have automated container unloading.


What do you mean here? Are dock workers legally protected from automation?


Yes, some directly via the LHWCA (a federal law) and some indirectly via labor union contracts with port associations that rent from the government port authorities. Ultimately it's such a powerful union that US presidents often take part in the negotiations.

The recently negotiated (nation-wide) deal:

In the deal, the union holds on to existing contract language that protects against certain types of automation, and has won guaranteed jobs where partial automation is put in place.

Port employers will still be blocked from implementing “fully automated” port technology: the employers cannot implement equipment that is “devoid of human interaction.” And the union and the employers have to agree on implementing any new technology; if they cannot agree, the question gets sent to arbitration.

This language prevents East and Gulf Coast port employers from implementing the more extreme forms of automation seen in other parts of the world, including the Long Beach Container Terminal, in Southern California, where autonomous trucks and cranes entirely replace human operators.


These ports could buy the union's vote if it was important to them by giving existing workers some equity in the system that is intended to replace them.


What’s to stop an enterprising, well funded startup from opening a fully automated port?


The Jones Act


The ILWU controls labor at all west coast ports, including LALB, which is responsible for a majority of consumer imports from the Pacific. It has bargained effectively to block developing container handling automation systems.


"Bargained effectively"? Union boss Harold Daggett extorted the US public by threatening to "cripple the US economy" if his demands were not met.


I am trying to use more neutral language when I comment, so that the underlying assertion of facts are more likely to resonate with someone who may disagree with me.

I agree with your characterization, but I just wanted the parent comment to look up the ILWU, where they would probably see some of those facts for themselves and be more likely to understand my position.

As the human internet dies, I feel like it's more important for those of us that want some of it to survive to participate constructively.


Fair enough.


That is the leverage unions have. Do you expect them not to use it? The union isn't there to protect the economy, it's to protect its members.


Please provide me examples where the CEO of a major company threatened to cripple the US economy if his/her demands were not met.


As opposed to businesses, which never use their leverage for the business's benefit over the nation as a whole?


I don't know the legal protections around automation, but the unions vehemently fight efforts to automate the industry.

You should listen to Harold Daggett, the dockworkers' union boss who threatened to cripple the US economy if automation was introduced:

https://ijr.com/union-boss-willing-to-cripple-america-made-m...

He also lives in a 70,000 sqft mansion:

https://nypost.com/2024/10/02/business/harold-daggetts-spraw...

