Hacker News | zzzoom's comments

At some point SBCs that require a custom linux image will become unacceptable, right?

Right?


Using vendor kernels is standard in embedded development. Upstreaming takes a long time so even among well-supported boards you either have to wait many years for everything to get upstreamed or find a board where the upstreamed kernel supports enough peripherals that you're not missing anything you need.

I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev than as general-purpose PCs. For years now, you've been able to find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general-purpose compute unless you really need something an SBC offers, like specific interfaces or low power.


Somehow, this isn't a problem in the desktop space, even though new hardware that requires new drivers regularly gets introduced there too.

x86 hardware has a standard way to boot and bring up the hardware, usually to at least a minimum level of functionality.

ARM devices aren't even really similar to one another. As a weird example, the Raspberry Pi boots from the GPU, which brings up the rest of the hardware.


It's not just about booting, though. We solve this with hardware-specific devicetrees, which is less elegant than runtime discovery through PCI/ACPI/UEFI/etc., but it works. But we're not just talking about needing a hardware-specific devicetree; we're talking about needing hardware-specific vendor kernels. That's not due to the lack of boot standardization and runtime discovery.

Please forgive this naive question from someone with zero knowledge in the area: What's stopping ARM/RISCV-based stuff from using ACPI/UEFI?

Nothing, and there has been a push for more standardization including adopting UEFI in the ARM server space. It's just not popular in the embedded space. You'd have to ask Qualcomm or Rockchip about why.

So we can hope for a future where cheap ARM/RISC-V SBCs are as pleasant to use as any bog standard x86?

You can hope but I don't think it'll happen any time soon.

The lack of standardized boot and runtime discovery isn't such a big issue; u-boot deals with the former and devicetrees deal with the latter. We could already have an ecosystem where you download a bog-standard Ubuntu ARM image plus a bootloader and devicetree for your SBC and install them. It wouldn't be quite as elegant as on x86, but it wouldn't be that far off; you wouldn't have to use SBC-specific distros, and you could get your packages and kernels straight from Canonical (or Debian or whatever).
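As a sketch of what that could look like (every name and path here is illustrative, not taken from any real board): the generic distro ships one kernel, the board contributes a `.dtb`, and a u-boot distro-boot `extlinux.conf` ties them together:

```
# /boot/extlinux/extlinux.conf -- hypothetical example: a stock distro
# kernel paired with a board-specific devicetree blob. U-boot's distro
# boot reads this, loads each file, and boots with the named fdt.
label ubuntu
    kernel /boot/vmlinuz
    initrd /boot/initrd.img
    fdt /boot/dtbs/rockchip/rk3588-some-board.dtb
    append root=/dev/mmcblk0p2 ro
```

In this model only the `fdt` line (and the bootloader itself) is board-specific; everything else could come straight from the distro.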

The reason we don't have that today is that drivers for important hardware just aren't upstream. They remain locked away in Qualcomm's and Rockchip's kernel forks for years. Last I checked, you still couldn't get HDMI working with upstream Linux on the popular RK3588 SoC, for example, because the HDMI PHY driver was missing, even though the RK3588 had been out for many years and the PHY driver had been available under the GPL for years in Rockchip's fork of Linux.

Even if we added UEFI and ACPI today, Canonical couldn't ship a kernel with support for all SBCs. They'd have to ship SBC-specific kernels to get the right drivers.


Right. Thanks!

The "somehow" is Microsoft, which defines the hardware architecture of an x86-64 desktop/laptop/server and builds the compatibility test suite (Windows HLK) to verify conformance. Open source operating systems rely on Microsoft's standardization.

Microsoft's standardization got AMD and Intel to write upstream Linux GPU drivers? Microsoft got Intel to maintain upstream xHCI Linux drivers? Microsoft got people to maintain upstream Linux drivers for touchpads, display controllers, keyboards, etc?

I doubt this. Microsoft played a role in standardizing UEFI/ACPI/PCI which allows for a standardized boot process and runtime discovery, letting you have one system image which can discover everything it needs during and after boot. In the non-server ARM world, we need devicetree and u-boot boot scripts in lieu of those standards. But this does not explain why we need vendor kernels.


I think they're related. You can't have a custom kernel if you can't rebuild the device tree. You can't rebuild blobs.

> You can't have a custom kernel if you can't rebuild the device tree.

What is this supposed to mean? There is no device tree to rebuild on x86 platforms, yet you can have a custom kernel there. You sometimes need to use kernel forks on x86 too, to work with really weird hardware without upstream drivers; there's nothing different about Linux's driver model on x86. It's just that in the x86 world, for the vast, vast majority of situations, pre-built distro kernels built from upstream kernel releases have all the necessary drivers.


It's a legacy of the IBM PC compatible standard, which had multiple vendors building computers and peripherals that work with each other. Microsoft tried their EEE approach with ACPI, which made suspend flaky in Linux in the early years.

This does not explain why the drivers for all the hardware are upstreamed almost immediately in the x86 world but remain locked away in vendor trees for years or forever in the ARM world. Vendor kernels don't exist because of a lack of standardized boot and runtime discovery.

I have always found it perplexing. Why is that required?

Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?


Yeah, lack of peripheral drivers upstream for all the little things on the board, plus (AIUI) ARM doesn't have the same self-describing hardware discovery mechanisms that x86 computers have. Basically, standardisation. They're closer to MCUs in that way, is how I found it (though my knowledge is way out of date now; it's been years since I was doing embedded).

I've just been doing some reading. The driver situation in Linux is a bit dire.

On the one hand, there is no stable driver ABI, because that would restrict Linux's ability to optimize its internal interfaces.

On the other hand, vendors (like Orange Pi, Samsung, Qualcomm, etc.) end up maintaining long-running and often outdated custom forks of Linux in an effort to hide their driver sources.

Seems... broken.


What's the feasibility these days of using AI-assisted software maintenance for drivers? Does doing it yourself with AI somewhat bridge the unsupported gap, or is that not really a valid approach?

I've found AI tools to be pretty awful for low level work. So much of it requires making small changes to poorly documented registers. AI is very good at confidently hallucinating what register value you should use, and often is wrong. There's often such a big develop -> test cycle in embedded, and AI really only solves a very small part of it.

That's just the new normal. Everyone is doing AI assisted work, but that doesn't mean the work goes away.

Someone still has to put in meaningful effort to get the AI to do it and ship it.


Or you can just upstream what you need yourself.

There are some projects to port UEFI to boards like Orange Pi and Raspberry Pi. You can install a normal OS once you have flashed that.

https://github.com/tianocore/edk2-platforms/tree/master/Plat...

https://github.com/edk2-porting/edk2-rk3588


There also seems to be a plan to add UEFI support to u-boot[1]. Many of these kinds of boards have u-boot implementations, so they could then boot a UEFI kernel.

However, many of these ARM chips have their own sub-architecture in the Linux source tree; I'm not sure it's possible today to build a single image with them all built in and choose the sub-architecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?

(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)

[1] https://docs.u-boot.org/en/v2021.04/uefi/uefi.html
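For what it's worth, u-boot's UEFI subsystem already lets you launch an EFI binary by hand; a rough sketch at the u-boot prompt (device numbers and file paths here are hypothetical):

```
# At the u-boot prompt: load an EFI application (e.g. GRUB or a distro
# shim) plus a devicetree from the first MMC partition, then run it.
=> load mmc 0:1 ${kernel_addr_r} /EFI/BOOT/BOOTAA64.EFI
=> load mmc 0:1 ${fdt_addr_r} /dtb/board.dtb
=> bootefi ${kernel_addr_r} ${fdt_addr_r}
```

The devicetree still has to come from somewhere; `bootefi` just passes the loaded blob to the EFI application as a UEFI configuration table.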


Per TFA, the Orange Pi 6 Plus ships with UEFI, but the SoC requires a vendor specific kernel.

The Orange Pi 6 Plus supports UEFI from the get-go.

> At some point SBCs that require a custom linux image will become unacceptable, right?

The flash images contain information used by the BIOS to configure and bring up the device. It's more than just a filesystem. Just because it's not the standard consumer "BIOS menu" you're used to doesn't mean it's wrong. It's just different.

These boards are based on solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.

So, packaging a standard arm linux install into a "custom" image is perfectly fine, to be honest.


If the image contains information required to bring up the device, why isn't that data shipped in firmware?

> If the image contains information required to bring up the device, why isn't that data shipped in firmware?

The firmware is usually an extremely minimal set of boot routines loaded on the SoC package itself. To save space and cost, its goal is to jump to an external program.

So, many reasons:

- Firmware is less modular, meaning you can't ship hardware variants without also shipping firmware updates (the boot blob contains the device tree). It also raises cost (see next).

- It requires flash, which adds to the BOM. The intended designs of these ultra-low-cost SoCs would simply ship a single eMMC (which the SD card replaces).

- There's no guaranteed input device for interactive setup. They'd have to make UI variants, including for weird embedded devices (such as a transit kiosk). And who is that for? A technician who would just reimage the device anyway?

- Firmware updates in the field add more complexity. These are often low-service or automatic-service devices.

Anyway, if you're shipping a highly margin-sensitive, mass-market device (such as a set-top box, which a lot of these chipsets were designed for), the product is not only the SoC but also the board reference design. When you buy a Pi-style product, you're usually missing out on a huge amount of normally-included ecosystem.

That means you can get an SBC for cheap using mass-produced merchant silicon, but the consumer experience is sub-par. After all, this wasn't designed for your use case :)
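To make the "boot blob contains the device tree" point concrete: the device tree is a small hardware description compiled from source like the following. This fragment is entirely hypothetical, just to show the shape:

```
/dts-v1/;

/ {
	model = "Vendor Example SBC";
	compatible = "vendor,example-sbc";

	/* Where the RAM lives -- the kernel learns this from here,
	   not by probing the hardware. */
	memory@40000000 {
		device_type = "memory";
		reg = <0x40000000 0x80000000>; /* 2 GiB at 0x40000000 */
	};

	chosen {
		stdout-path = "serial0:115200n8";
	};
};
```

Whether this description lives in on-board firmware or in the flashed image is exactly the packaging decision being discussed above.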


In some cases the built-in firmware is very barebones, just enough to get U-boot to load up and do the rest of the job.

“Custom”? No.

Proprietary and closed? One can hope.


I think even custom is unacceptable. It's too much of a pain being limited in your distro choice because you're tied to specific builds. On x86 you can run anything.

It's predatory pricing.

If any of those addresses sold a single sat, the price would crash hard.

I would bet heavily against that.

Someone selling single Satoshis from Satoshi's stash would herald the second coming of Satoshi. Can you imagine the hype?


Hype would not cause a price increase in this case. The value of bitcoin is partly due to scarcity. 1.1m previously inaccessible bitcoins suddenly becoming liquid would cause a drop in price, even if they were sold slowly. It could also cause panic selling as it might indicate the wallets have been brute force cracked.

> The value of bitcoin is partly due to scarcity.

Partially due to scarcity, but also due to hype.

As a weaker point: I would expect an increase in the market capitalisation of the bitcoin float. I.e. compare the price of bitcoin multiplied by the amount of movable bitcoin right now with the new price of bitcoin multiplied by the newly enlarged amount of movable bitcoin after the first satoshi is sold.

The strong claim is that the price per bitcoin would go up, too. Not just the market cap of the float.

> It could also cause panic selling as it might indicate the wallets have been brute force cracked.

Suppose I brute force cracked it to get access to the bitcoins. I would:

Quietly amass a large offsetting position in the bitcoin futures market (and wherever else you can do this) before I make any moves. Then (assuming I couldn't hedge my whole exposure at decent prices) I would use all means available to pretend that Satoshi had woken up again, e.g. use specially fine-tuned LLMs to mimic his style and post on the usual mailing list, etc. Some people will believe you, some won't.

I'd say post a bit in Satoshi's name to build interest. Then skeptics will say: prove it. And you 'prove' it by moving a few satoshis back and forth between your own wallets. (Don't sell anything yet.) The hype will build, and you sell into it on the futures market.

The last step is important, because you can get rid of your bitcoin exposure this way, without any trace on the blockchain. So you can even vow to never release any of the stash on the market and other shenanigans. That should help the price.

Well, the futures will come due eventually, and then you can move the stash. The price might or might not crash, but you don't care, because you already locked in your profits on the derivatives.


> Partially due to scarcity, but also due to hype.

I agree with you, but isn't the value of gold also almost entirely due to hype? Sure, there are some industrial applications, but those are minor components of demand for gold.

Hype is just another way of saying "people obtain pleasure from owning this thing", and that's pretty much what sets demand for most goods. Don't even get me started on diamonds. The whole convention of wedding rings having diamonds is hype.

Bitcoin just makes this explicit and impossible to deny.


Well, there's pleasure from directly owning the thing: you can look at gold in your vault and appreciate it for itself.

But a bitcoin in your vault by itself is indistinguishable from a shitcoin I just made by forking bitcoin with the same code but a new genesis block. Or even more pointed: the alternative futures of bitcoins after any route not taken by the community after any hard fork.

In any case, I agree that much of the value of gold comes from social conventions, too, yes.


Biggest pump and dump in history

That's a genius idea: keep adding broken stuff into the standard until there's no choice but to break compatibility to fix it.


No no no, you add new stuff that will totally fix those problems!


Imagine considering some bird poop staining the paint dangerous instead of the air pollution that's slowly killing you.




  If gasoline engines burned their fuel as efficiently as possible, they would produce three by-products: water vapor (H2O), carbon dioxide (CO2) and nitrogen (N2). 

  Unfortunately, engines do not run perfectly, and as a result, they also produce three by-products commonly referred to as the "terrible trio" of automotive pollutants. This trio includes the following:

  *  Carbon monoxide (CO) – An odorless, tasteless, poisonous gas, carbon monoxide can cause a variety of health problems and even death. Many urban areas experience critically high levels of carbon monoxide, especially during the cold winter months when engines take longer to warm up and run cleanly.

  *  Unburned hydrocarbons (HC) – Responsible for causing a variety of respiratory problems, unburned hydrocarbons can also cause crop damage and promote the formation of smog

  *  Oxides of nitrogen (NOX) – Like unburned hydrocarbons, oxides of nitrogen cause respiratory problems and promote the formation of smog
* https://www.walkerexhaust.com/support/exhaust-101/exhaust-ga...


Take a nice big sniff. CO2 and water are odorless.


Have you ever seen an inversion? It’s crazy to imagine anyone who has ending up thinking “maybe that shit-brown cloud stuck over the city is fine”.


Currently in Korea where the AQI is close to 200. Can confirm.

Granted most of that is probably coal power plants and stuff but... All the more reason for more solar.


I can't even play the decent free games I got because I can't find them in the UI. It doesn't have sort by rating (or any other popularity metric) so you have to wade through the junk. Imagine paying for that experience...


They did try to pull a Valve and use their successful game to fund a game store that prints money.


The 9800X3D has wider everything. Decoder, execution ports, vectors, cache, memory bandwidth...


I think my i9 was released right after the Spectre and Meltdown mitigations in 2019, but I seem to remember even more recent vulns in that family… so that could also be a factor.


If you get the "You're absolutely right!" response from an LLM that screwed up on a field you're familiar with and still let them play with your health, you're...courageous to say the least.


What you're describing is called Gell-Mann Amnesia.


You'd have to be quite tall to average an ~80cm step. 193cm (6'4") according to a quick search.


The original unit was the [slightly smaller] Roman mile, which was standardized with the military in mind, i.e. able-bodied men in their prime. Seems like the average for men today is 2.5 feet or so, which is more or less still on the money for 2,000 steps/1,000 paces to a mile.
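A quick sanity check of the pace/mile arithmetic, assuming the figures above (a pace is 2 steps, the rule of thumb is 1,000 paces per mile, an average step is ~2.5 ft, and a statute mile is 5,280 ft):

```python
# Rule of thumb: 1,000 paces = 2,000 steps per mile.
STEP_FT = 2.5                      # assumed average step length
STEPS_PER_MILE = 2 * 1000          # 1,000 paces, 2 steps each
STATUTE_MILE_FT = 5280

distance_ft = STEP_FT * STEPS_PER_MILE
print(distance_ft)                            # 5000.0 ft, ~5% under a statute mile
print(distance_ft / STATUTE_MILE_FT)

# The ~80 cm step from the parent comment: 2,000 such steps cover
# almost exactly a statute mile (1,609.34 m).
print(2000 * 0.80)                            # 1600.0 m
```

So both framings agree: 2,000 average steps land within a few percent of a mile.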

