Our organization dropped trust in Cloudflare and all of its IP address assignments a while back. We don't allow any data from their networks, CDNs, or authoritative DNS servers to be received by our network.
It is just not worth dealing with Cloudflare at all in a business network.
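For anyone curious how a block like that can be kept current, here is a minimal sketch in Python. It assumes you work from Cloudflare's published range lists at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6, and it leaves the enforcement side (nftables, router ACLs, or whatever your network already runs) entirely out; it just writes a plain CIDR list for that tooling to ingest:

```python
# Minimal sketch: pull Cloudflare's published IP ranges and emit a blocklist.
# Enforcement is left to whatever firewall tooling your network actually uses;
# write_blocklist() here only writes one CIDR per line to a text file.
import urllib.request

CLOUDFLARE_RANGE_URLS = [
    "https://www.cloudflare.com/ips-v4",
    "https://www.cloudflare.com/ips-v6",
]

def fetch_ranges(urls=CLOUDFLARE_RANGE_URLS):
    """Return every CIDR block Cloudflare currently publishes."""
    ranges = []
    for url in urls:
        with urllib.request.urlopen(url, timeout=10) as resp:
            lines = resp.read().decode().splitlines()
        ranges.extend(line.strip() for line in lines if line.strip())
    return ranges

def write_blocklist(path="cloudflare-blocklist.txt"):
    """Write the current ranges for the firewall to pick up."""
    with open(path, "w") as fh:
        fh.write("\n".join(fetch_ranges()) + "\n")

if __name__ == "__main__":
    write_blocklist()
```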
That essentially means you can't use any services that happen to be hosted behind Cloudflare, like OpenAI/ChatGPT, GitLab, HubSpot, and Shopify. And anyone on WARP, along with about half of iCloud Private Relay requests, won't make it to your services.
I suppose it strongly depends on your organisation, but I'm not seeing how this would be a realistic option unless you're very powerful or have a lot of cash to burn on non-core business processes.
Have you run into any issues yet with Cloudflare customers using their Gateway/Zero Trust offerings that end up egressing with Cloudflare IPs? And how do you plan on handling that as that business grows?
Apple Private Relay is also fronted by Cloudflare. Or are you actually allowing large amounts of traffic from Cloudflare?
Although the bulk of my career has been in the IT space, I've been a developer for a lot longer. I can tell you that any time you build software as a means to a "community", it never works out. You don't build software to get a "community"; you build software, and a "community" will eventually evolve around it organically. A developer should never focus on anything beyond the software itself. When you lose focus on the software you are building, the project will suffer.
yeah - given his comments about hosting containers and what-not, it sounded like he has some community. I'm guessing after 20 years of doing distros he's just looking for a new itch to scratch. Looking forward to seeing what it is...
Have a look at https://www.funtoo.org/Funtoo_Profiles. Funtoo had a very different way to configure your version of the distribution, especially if you are used to something like Ubuntu. It was a spin on Gentoo, I think diverging a bit more later on. And it opted against shipping systemd (ever) and Wayland (until tested, the docs say), which is a minus for some and a plus for others, but certainly rare.
There is a reason that anyone who cares about forks staying private forever (even if you delete them) should never use or trust a third party. I never use GitHub. I run my own git server, and everyone else should too, in my opinion. GitHub has always been a huge security problem.
Sadly, my data was actually exposed during the hack, and I haven't even had AT&T for the last 5 years. It's pretty bad that my data is still leaking when I haven't used their service in years. I currently use Ting.
In my opinion, the term "open source" means "the source [code] is open for people to see". You might be able to stretch that to include "and modify", but that is already a stretch.
That is all "open source" really means: it's open for people to see/view. Beyond that is where a license comes in to restrict or grant freedoms beyond the term "open source". That is what the OSI does: it adds additional requirements within a license. It cannot change the definition of the term "open source", however.
"Open" comes with the ability to participate. An open door is there to let something physical through, unlike a transparent window that only lets one see; an open market, an open auction, or an open sports league explicitly implies more than just viewing the activity.
For many of us, we've never used "open" to mean viewing.
Right now I wouldn't trust the license to let me message my boss or buy something from a store on my personal phone without a violation; "commercial activity" is an extremely broad term.
I love how people have this unqualified belief that the OSI somehow owns the rights to define the term "open source". I guess these people don't know that the OSI doesn't own any rights to the term "open source", and that when the OSI tried to trademark it, the application was rejected. So "open source" doesn't have to mean what the OSI says it does. The OSI has its own definition of what "open source" means (which I disagree with). FUTO and others are well within their rights to define "open source" the way they want.
If the OSI had the right to define it, they would be suing FUTO right now. They aren't, because they know they can't enforce any definition of "open source".
I wrote an application that sends GPS coordinates to a Web API that plugs into Home Assistant, which I use to open and close the garage door when I come home or leave the house. Two independent sensors confirm the action completes, and if it doesn't, it re-sends the command until it does. Another sensor detects when the door from the garage into the house is opened, which closes the garage door if it was opened by the GPS application.
The application sits on an RPi in the truck, powered through the cigarette lighter plug. I don't know how many hours I spent on it, as it was a weekend project I did when I had free time (not every weekend). I can say I got it working in about 4 months, though. So however many weekends I had free in that 4-month period is about how long it took. Probably 40-60 hours.
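For anyone wanting to try something similar, here is a rough sketch of what the in-truck sender side could look like; it is not the actual project code. The webhook URL, home coordinates, and the read_gps_fix() stand-in are all placeholders, and the open/close decision plus the sensor-confirmation retries would live in Home Assistant automations on the other end:

```python
# Rough sketch of an in-truck GPS reporter, not the original project code.
# Assumes Home Assistant exposes a webhook automation at WEBHOOK_URL that
# decides whether to open/close the garage door and verifies it via sensors.
import json
import math
import time
import urllib.request

WEBHOOK_URL = "http://homeassistant.local:8123/api/webhook/truck-gps"  # placeholder webhook id
HOME = (35.0000, -80.0000)   # placeholder home coordinates
RADIUS_M = 150               # distance that counts as "home"

def distance_m(a, b):
    """Approximate great-circle distance in metres (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def read_gps_fix():
    """Stand-in: replace with a real GPS read (gpsd, serial NMEA, ...)."""
    return HOME  # placeholder so the loop runs

def post_position(lat, lon, at_home):
    """Report the current position and home/away state to Home Assistant."""
    body = json.dumps({"lat": lat, "lon": lon, "at_home": at_home}).encode()
    req = urllib.request.Request(WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

def main():
    was_home = None
    while True:
        lat, lon = read_gps_fix()
        at_home = distance_m((lat, lon), HOME) <= RADIUS_M
        if at_home != was_home:          # only report arrive/leave transitions
            post_position(lat, lon, at_home)
            was_home = at_home
        time.sleep(5)

if __name__ == "__main__":
    main()
```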
So efforts would be better spent on training materials and perhaps software (clients and such) that is easier to use than what is currently available today. Am I reading your response correctly?
I don't think training materials would be necessary. YMMV. Most people get and appreciate the threading model.
Clients could be improved, I guess. Myself, I use a self-improved version of Tass that I originally got with a copy of Coherent (a UNIX Version 7 clone) around 1991. It uses an ncurses "GUI", which I admit I have thought of upgrading from time to time so that it uses a real GUI such as GTK.
However, though I suppose it could do with improving, it works well enough that I can't be bothered spending the time to debug/rewrite it.
My main thrust is that most people have never heard of Usenet and what it is capable of.
I might look into a few clients and see if I can contribute to making them better. I also prefer a TUI/CLI client. I don't use X11 very often personally.
My biggest consideration for Usenet (and one of the arguments that's been made against it when I've talked to people) is that people aren't willing to pay for it. That was the reason I decided to ask about doing a reset that would allow the feed to start over, so that people could host their own node with minimal storage requirements: the current Usenet would continue for those who want it, and the newer one would be for those who want to host their own or use a friend's host or something.
So what is the best way to overcome the "I don't want to pay for it" argument when trying to sell people on the idea of using Usenet?
> So what is the best way to overcome the "I don't want to pay for it" argument when trying to sell people on the idea of using Usenet?
Ah, but people are paying for it. They pay with their time and interest in the subject-matter. Their time has value. Their knowledge has value.
This "paying for it" mentality has arisen with the commercial takeover of the Internet. Back when the Internet was young, there was a completely different mentality. It was a 'How can I help the next Man?' mentality. It was quite literally an 'open-source' mentality, and that was because the early Internet was not the domain of the commercial interests, but the domain of the universities and other centres of advanced learning.
My earliest Usenet leaf node was an offshoot of Adelaide University in South Australia. I was part of a UUCP 'store and forward' sub-network. I remember the excitement around 1991 or so when the buzz was that there were now a million nodes on the Internet(!).
> I also prefer a TUI/CLI client.
When you're using pure text 99.9% of the time, you might as well use a text-based client. If you have a 'Desktop GUI' you merely run that text-based client in an xterm.
I guess what I mean is, there are a lot of things Usenet does only because back in the '90s we didn't have the technology we have today. Today there are more alternatives than there were, which means Usenet doesn't need to be the host for things like binaries, which account for most of the storage requirements that keep people from self-hosting on the existing network. If we started over without binary storage and a few other things, would that not make Usenet a better service with smaller storage requirements? Or am I just missing it completely?
Not sure if this is what you mean, but there are several news servers that only pull the non binary group feeds. Some of them are free. As simonblack mentioned, there would just need to be a significant effort to show people how easy it is to use them. This would include how to filter out spam, or filter in known-good people and content.
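As a taste of how little client code it takes to read a text-only group, here is a minimal sketch using Python's nntplib (in the stdlib up to Python 3.12; it was removed in 3.13). The server name is a placeholder for whichever free text-only provider you pick, and some providers also require a login:

```python
# Minimal sketch of reading a text newsgroup with Python's nntplib.
# SERVER is a placeholder; substitute the text-only provider you signed up with.
import nntplib

SERVER = "news.example.org"
GROUP = "comp.misc"

def show_recent(n=10):
    with nntplib.NNTP(SERVER) as news:
        resp, count, first, last, name = news.group(GROUP)
        print(f"{name}: {count} articles ({first}-{last})")
        # Fetch overview data for the last n articles and print their subjects.
        resp, overviews = news.over((max(first, last - n + 1), last))
        for number, fields in overviews:
            print(number, fields.get("subject", ""))

if __name__ == "__main__":
    show_recent()
```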
As wmf mentioned, you can also host your own node (leafnode or Hamster); I have not run them myself. What limits you put on the server would be up to you, and who you replicate to/from would be up to you, much like the way IRC servers can be interconnected, assuming the admins of each set of servers agree with each other's policies.
Why would this be a bad idea?
As with self-hosting any software, there is the matter of having a documented policy on acceptable use, how you will remediate illegal material, handle abuse complaints, etc. It is not something that can be set up and forgotten about. Whether or not it is a bad idea depends on how much free time and patience you have. The people you replicate from would have to perform the same due diligence as you. So basically the same issues as running chat, forum, wiki, fediverse, shared email, chan, and other servers. Running any of these requires cognitive fortitude, a lack of bias, patience, the ability to deal with challenging people, and so on. In other words, what dang and team do here 24/7/365.
There is some additional information here [1] on running your own news servers.