
I've gotten used to this essential feature too via Semaphore CI, and I just can't stand not being able to SSH into a GitHub Action. Debugging is so slow.


I've seen people spend something like 2 hours fixing something that could be fixed in minutes with a normal feedback cycle, instead of the 5-minute "change > commit > push > wait > see results" cycle GitHub Actions forces people into. It's baffling until you realize Microsoft charges per usage, so why fix it? I guess the baffling part is how developers put up with it anyway.


Does not sound like a GitHub failure; it sounds like the company's failure. They haven't invested in the developer experience, and they have developers who cannot run stuff locally and have to push to CI in order to get feedback.


You can't run a GitHub CI pipeline locally (in general; there are some projects that attempt it, but they're limited). Even if you make as much of it runnable locally as possible (which you should), you're inevitably going to end up debugging some stuff by making commits and pushing them. Release automation. Test reporting. Artifact upload. Pipeline triggers. Permissions.

Count yourself lucky you've never had to deal with any of that!


Yes, there are a few things you can't do locally. But the vast majority of complaints I see (90%+) are for builds/tests etc. that should have the same local feedback loops. CI shouldn't be anything special; it should be 'shell as a service' with some privileged credentials for pushing artefacts.
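To make 'shell as a service' concrete, here's a minimal sketch of the pattern: one entry-point script that both developers and the CI job run, so the feedback loop is identical in both places. The script name, step names, and commands are all hypothetical placeholders, not from any real pipeline.

```shell
#!/usr/bin/env sh
# Hypothetical ci.sh: the single entry point that developers run locally
# and the CI job runs remotely, so both get the same feedback loop.
# Step names and commands are placeholders.
set -e

step() {
  name="$1"
  shift
  echo "==> ${name}"
  "$@"
}

step "Lint"  echo "linting..."   # e.g. shellcheck ./*.sh
step "Build" echo "building..."  # e.g. make build
step "Test"  echo "testing..."   # e.g. make test
```

The workflow YAML then shrinks to a single `run: ./ci.sh` step, and only the privileged bits (credentials, artifact pushes) stay CI-specific.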

> Release automation. Test reporting. Artifact upload.

Those I can actually all do locally for my open source projects on GitHub, if I have the correct credentials in my env. It is all automated (which I developed/tested locally), but I can break glass if needed.


> Those I can actually all do locally for my open source projects on GitHub

Maybe I wasn't clear enough in my description, but you definitely can't locally do things like automatically creating a release in a GitHub workflow, sending test results as a comment to PRs automatically, or uploading CI pipeline artifacts. Those all intrinsically require running in GitHub CI.


I agree there is stuff you can't test locally, but in my experience people most of the time are complaining about stuff they should have local feedback loops for, such as compiling, testing, end-to-end testing, etc.

You give some good examples, and I agree there is CI-specific stuff that can only really be tested on CI, but it's a subset of what I generally see people complaining about.

> can't locally do things like automatically creating a release in a Github workflow, sending test results as a comment to PRs automatically and uploading CI pipeline artifacts locally.

> uploading CI pipeline artifacts locally

I actually tested this locally before opening up a pull request to add it. I just have my workflow call out to a make target, so I can do the same locally, using the same make target, if I have the right credentials.

E.g. this workflow triggers on a release.

```yaml
name: Continuous Delivery (CD)

on:
  release:
    types: [published]

# https://docs.github.com/en/actions/using-jobs/assigning-perm...
permissions:
  contents: write
  packages: write

jobs:
  publish-binary:
    name: Publish Binary
    runs-on: ${{ matrix.architecture }}
    strategy:
      matrix:
        architecture: [ubuntu-24.04, ubuntu-24.04-arm]
    steps:
      - name: Checkout code.
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Setup Nix.
        uses: cachix/install-nix-action@4e002c8ec80594ecd40e759629461e26c8abed15 # v31.9.0
      - name: Publish binary.
        run: nix develop -c make publish-binary RELEASE="${GITHUB_REF_NAME}"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by GitHub Actions.
```

Which, after building the binary, calls this script:

```bash
#!/usr/bin/env sh

set -o errexit
set -o xtrace

if [ "$#" -ne 2 ]; then
  echo "Usage: $0 RELEASE_TAG TARGET"
  echo "$#"
  exit 1
fi

RELEASE="$1"
TARGET="$2"

tar -czvf "${TARGET}.tar.gz" -C "target/${TARGET}/release" "clean_git_history"
gh release upload "${RELEASE}" "${TARGET}.tar.gz"
rm "${TARGET}.tar.gz"
```

So I was able to test large parts of this locally first via `make publish-binary RELEASE="test-release"`.


Can't do much about that when the thing you're troubleshooting is the CI platform itself. Say you're troubleshooting why the deployment doesn't work; you somehow got an environment variable wrong for whatever reason. So you edit and add an "env | sort" before that, commit it, push it, and so on. With "rebuild with SSH", you are literally inside the "job" as it runs.


Yes, you can't really debug CI-specific stuff locally, like if you're setting up build caching or something. But it seems like 90%+ of the time people are complaining about builds/tests that should have local feedback loops.


Yeah, fair point, I see that a lot in the wild too. I guess I kind of assumed we all here had internalized the practice of isolating everything into one command that runs remotely, like "make test" or whatever, rather than what some people do: put entire shell-scripts-but-in-YAML in their pipeline configs.


Yeah, every time I see logic in YAML I cringe. Trying at work to get people to use a task runner, or even just call out to scripts, was a fight...


Not all, though. I've been looking at Minimed pump reverse engineering (which would be just reading glucose data, not controlling the pump), and that's not solved yet, at least not for the 780G. But I hope it will be, and perhaps I'll be able to contribute.


I don't work for Medtronic, but it's extremely unlikely that will happen. It's not merely a matter of reverse engineering -- after the original Medtronic "hack" / reverse engineering efforts (the ones that led to the original OpenAPS system being developed), the FDA put out new guidance on cybersecurity protections for insulin pumps.

The communication between your phone/pump or glucose sensor/pump is encrypted now for all newer devices.

> Diabetic companies like Insulet have been very lax when it’s come to the hacking of their devices

Absolutely not true, not any more.


No, it's true. Companies like Insulet and Dexcom could send out lawsuits to all the open source projects out there that involve REing. Dexcom's glucose share API was REed years ago, and Dexcom hasn't even tried updating it or stopping the use of unofficial APIs. All I'm saying is that the companies really don't care at all.


> The communication between your phone/pump or glucose sensor/pump is encrypted now for all newer devices.

May I ask where you got this info? And what does “newer” mean here?


This person is referring to this guidance document: https://www.fda.gov/medical-devices/digital-health-center-ex...

I'm a medical device developer working on this exact problem (glucose control)


Thanks!


Some password managers like 1Password can do the two-factor stuff for you, so you don't have to pull out your phone. On the fully supported pages it'll just autofill your username, main password, and one-time password.


Just as a reminder: if you save the 2FA token in the same password database as the actual password for the website, you have effectively neutralized 2FA, or at the very least weakened it.


If you store your password in a password manager, is it accurate to still frame it as 'something you know'? Or is it just another 'something you have'?


The only app I have that I actually like. It's great.


Those of you who use ROS in production, do y'all use ROS 1 or 2? Do you maintain your own fork? I'm curious how people do this, with the upcoming Noetic deprecation.


Where I was, ROS 1 was standard and ROS 2 experimental, and nothing really worked. Slowly a half-assed, bug-ridden internal implementation of ROS 2 was started… a sh*tshow, to be honest. The discontinuity between ROS 1 and ROS 2 was, for me, just unacceptable, unprofessional, and awful.

If I could decide (and in the area where I am, we did) I would ditch the whole thing. After all, if you squint, ROS is a collection of things:

- a launcher (which is a very bad scripting language embedded in horrible XML). It can very easily be substituted with some Python or shell scripts.
- a description language for messages that can be read from C, Python, and Lisp; it can be substituted with raw sockets, Google Protocol Buffers, or whatever.
- a parameter/configuration distribution system, which can be implemented on top of libconfig (https://github.com/hyperrealm/libconfig)

All those options are pretty much standard, stable, and well supported, with bindings for any mainstream language.

I would run away from ROS2 to avoid another disaster when ROS3 comes.
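As a sketch of how little the launcher piece actually needs: a plain POSIX shell script that starts each "node" as a background process and tears them all down together. The node commands below are placeholders (`sleep` stands in for real node binaries); this is an illustration of the substitution idea, not a drop-in ROS launch replacement.

```shell
#!/usr/bin/env sh
# Hypothetical launch.sh: a bare-bones substitute for a ROS launch file.
# Starts each "node" as a background process and kills them all on exit.
# The `sleep` commands stand in for real node binaries.
set -e

pids=""
cleanup() {
  for pid in $pids; do
    kill "$pid" 2>/dev/null || true
  done
}
trap cleanup EXIT INT TERM

launch() {
  "$@" &
  pids="$pids $!"
}

launch sleep 1   # e.g. ./camera_node  --config camera.yaml
launch sleep 1   # e.g. ./planner_node --config planner.yaml

wait             # block until the nodes exit; Ctrl-C tears everything down
```

Obviously this covers only process lifecycle, not respawning or remapping, but it shows how far a few lines of shell can go.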


> After all, if you squint, ROS is a collection of things:

ROS came about because before ROS existed, robotics researchers cobbled together ad-hoc ROS-like systems using the tools you point out. ROS itself started as a project called "Switchyard", which was built to operate one of Stanford's robotics projects. Research labs around the country each had their own take on this kind of system, which made sharing research very difficult. What ROS did was standardize the platforms between all labs, enabling us to share our algorithms, which at the time mostly revolved around localization, mapping, and path planning.


Hi, I've dabbled in robotics from a hobbyist perspective, and I've gotten as far as installing ROS and following some tutorials and online courses a few times.

I agree with what you said about launcher/IPC/config - my plan, if I ever use ROS, is to keep it in a box and use my own communication layer to connect with things in various environments (rather than trying to solve incompatible conda environments).

However, one thing you've missed, which may well be the biggest thing ROS offers and the reason I haven't sworn it off entirely, is the library of robotics functionality such as SLAM and various planning algorithms.

In your opinion, are these at least well implemented? Would they be easy to rip out and run separately?


You said it much better: “launcher/ipc/config”.

My suggestion: start with ROS, but make absolutely sure to separate ROS from your “business intelligence”. The one thing I hate about ROS is that it makes this difficult. But keep them separated. Also, don't make the launch, IPC, and config parts dependent on each other. If possible, do not rely on the launcher, so you can switch away later if needed. And you will need to. ROS is great for prototyping, changing things, and development, but it is just not for deploying in production. The whole possibility of inspecting the messages between nodes means you have to pay a price.

> in your opinion, are these at least well implemented? would they be easy to rip out and run separately?

Not a simple answer:
- From the architecture POV, I think many questionable decisions were made: you need to master many languages: XML, launch files, message files, YAML, Python or C, CMake files for catkin… why?! Why not make launcher files and messages all just YAML or JSON? Just too complex for nothing.
- The code, you can look for yourself, is write-only. No way you can change anything without spending a year understanding it…
- BUT: it does work, and works well. I've not found a killer bug or anything. So I guess it's OK.


thank you!


Well, if 9/10 of a thing is a dumpster fire, would you expect the last 1/10th to not be?


I run a robotics dev tools company and we work closely with companies of all sizes across the robotics industry, so I have a unique perspective on this.

I would say about half of the industry (read: for-profit robotics startups from early stages through to 10,000s of robots in production) uses ROS, and of those probably 2/3 are on ROS 2 at this point, and the rest are in some stage of migration. ROS 2 solved a lot of problems that didn't need solving, but like it or not ROS 1 is abandonware at this point so almost no one is planning to stick with ROS 1 longer than they have to.

Most companies aren't maintaining their own fork of ROS (other than to submit patches upstream), but a small number of companies on ROS 1 forked it and diverged so significantly that there is no point trying to rebase on ROS 2 - of these GM Cruise was the most notable, although the future of their stack is unknown now that GM canceled the robotaxi project.

The other half of the industry uses some sort of in-house stack built from parts (compare batteries-included frameworks like rails/django to building a backend using libraries like expressjs and sequelize). There is usually some form of pub/sub messaging architecture, because pub/sub is a natural fit for robotics and makes it easier to log and replay/resimulate. Some common things I see are zeromq, vanilla DDS (no ROS), zenoh, or a home-grown pub/sub (sometimes using shared memory). The messages themselves are often protobuf, flatbuffer, cbor, json, or sometimes just raw c structs.

Building your own stack isn't hard, but it's much easier if you have used ROS before and know which concepts you want to reuse, rather than reinventing everything from first principles.

Some newer robotics frameworks are also starting to spring up which is great to see, for example https://github.com/copper-project/copper-rs and https://github.com/dora-rs/dora

There are also frameworks more specifically targeted towards robot learning, for example https://github.com/huggingface/lerobot

If people are just starting out in robotics, or just starting a robotics company, I still recommend ROS despite its warts, it is worth learning because it has had such a big influence on the current ecosystem. There is no "right answer" though, many companies have been successful with each approach.


What's that old robotics industry saying?

"You either die trying to scale ROS to production, or you live long enough to repeatedly reinvent it?" - Johnny 5


We use ros1 today. We have a very slow moving effort to migrate things to ros2 that I'm not really involved with.

A previous job used ros2. IMO, it's worse in most ways compared to ros1. The launch system of ros2 alone is enough reason not to use it.


You treat it like the metaphorical toxic waste it is: avoid it at all costs. You don't need ROS to get a dependency manager, a build system, a middleware, and/or a process manager.


I'd imagine it runs the gamut. ROS for some older places; those starting fresh are probably doing ROS 2.


I've been fiddling a lot with getting my CGM glucose data on a smartwatch, in a way that I like, recently. I'm in the middle of a small series of posts on my silly blog here: https://fyhn.lol/blog/glucose-watch/


I love the font. Maybe particularly because it's kind of big, so you can see the details. It seems to be EB Garamond: https://en.wikipedia.org/wiki/EB_Garamond


Oh yeah, EB Garamond is doing the heavy lifting for me, design-wise, no doubt about that :)

I use it on sonnet.io, consulting.sonnet.io, and sit.sonnet.io. I didn't use it on days.sonnet.io only because I felt weirdly obsessed with performance when I was coding it.


This is very interesting, and I (T1D) would like to learn more. However, the e-mail signup doesn't work for me. ("Oops! Something went wrong while submitting the form.")


Fairly common in Norway


Home: A 13-inch Asus Zenbook from 2014 (UX32LN) now running Ubuntu (MATE) 22.04. It's still good!

Work: A recent 13-inch Dell XPS that shipped with Ubuntu 22.04. It's also great, but the function key row has been replaced with touch buttons which is not good at all (including the escape and delete keys).

Work, previously: A 13-inch Dell Latitude from 2017. I originally ran 16.04 on it, then 18.04 until the above replaced it. Worked very well. Had every port you can wish for, including ethernet. Eventually the battery swelled and the keyboard stopped working.

