I recently implemented a drop-in replacement for one of our components. The old one used 250 GB of memory; the new one uses 6 GB. Exactly the same from the outside.
Bad code is bad code and poor choices are poor choices, but I think it's often fair to judge things harshly on resource usage.
Sure, but if you're talking about 250 GB of memory then you're clearly discussing edge cases vs. normal software running on an average person's computer. ;)
I still don't really get the argument. Okay, this extremely rich theoretical attacker can obtain the private key for the cert my service uses, and somehow they're able to sniff my traffic and could then somehow extract creds. But that doesn't give them my 2FA, which is needed to book each transaction, and as soon as these attacks are in the wild, anti-fraud/surveillance systems will be in much harder mode.
I don’t see QC coming as meaning bank accounts will be emptied.
My bank definitely doesn't require 2FA on every transaction; it only requires it to log in. I guess other people have more security-conscious banks than me.
Still, I think there is some benefit to attackers in being able to passively monitor connections: getting the info necessary to conduct some other type of fraud outside of the system. Lots of frauds live or die on knowing enough about the victim's financial situation.
However, it really doesn't matter; when it happens we will just switch to different encryption.
It's turning into a bit of a grift now. So many crypto-agility "consultants" popping up with their slop graphics. Never mind the fact that even if a relevant quantum computer is built, it will still cost its user millions of dollars to break each RSA key pair…
I don't necessarily think it would cost millions per key pair. Hard to say with the technology so immature, but it seems like the sort of thing with huge upfront costs but low marginal costs: once you have a QC, you don't have to build a new one for the next key pair.
I work for a pretty big one and we’ve got an exacc or twelve.
Regulatory thing for us: some workloads need production support for the data tier for various boring legal and compliance reasons, so our choices are kinda limited to Oracle and, these days, Mongo, who have made massive inroads into enterprise in the last couple of years.
As an IBM hobbyist user: picture something worse than VMS in 'hackerdom'. IBM's mainframe OSes are like NT/OS2 taken to the total extreme with objects, because by default you don't see files but objects, which might contain files... or not.
Imagine the antithesis of Emacs. That's an IBM environment with 3270 terminals and obtuse commands to learn.
I missed my Oasis dearly, but I couldn't wait anymore and got a Kobo Libra (not sure of the exact model; the color one). It's pretty much as good; the only thing I miss is the dual battery system.
KOReader is well supported and has all the features you mention.
You should be fine if you have an undergrad degree in math, engineering, physics, or some other hard science and have taken a course in advanced calculus/baby real analysis -- whatever it is that they call "calculus with proofs" nowadays. They try to hide all the algebraic geometry by talking about zero sets of systems of polynomial equations. Whether you are talking about an "algebraic variety", "manifold", curve, surface, etc., it's just the zero set of polynomials in possibly one, two, or three dimensions. The fact that it is the zero set of a polynomial equation means you have nice properties -- that is what algebraic geometry is about: studying these zero sets. And this paper tries to resolve singularities in the zero sets.
That said, papers like these have to be read slowly and carefully. If you need some help with understanding what this is about, the basic idea is that you want everything to be non-singular - no vanishing derivative, no undefined derivative, no self-crossing, etc. Locally, every one-dimensional curve should be a flat line, every two dimensional surface a plane, etc. Most of the theorems apply to non-singular varieties.
In the real world, things are singular at "bad" places, and you have to deal with that. So what you do is try to find a regularization, or a variety that is close to the one that you want to study, but with the singular parts smoothed out, and then a nice map from the smooth variety to the singular variety that hopefully does not change too much so that you can prove your result for the singular variety, and then see how it changes under your map.
In practice, these maps are called blow-ups and blow-downs. E.g. if you take a curve that crosses back on itself in the plane, embed the curve into three-dimensional space and pull it apart so that in three-dimensional space it does not cross itself. Then the projection back onto the plane is your map from the non-singular to the singular curve. If you can always do this (and you can), then you can think of singular curves as "shadows of non-singular curves".
Algebraically, you can think of it this way:
take a nodal cubic that crosses itself at the origin:
```y^2 = x^2(x+1)```
This has a singular point. We want to make a new curve without this singular point, and a nice projection map from the new curve to the old. We do this with two charts.
The first chart is when x is not zero.
```
chart 1: y = ux
Then we have u^2x^2 = x^2(x+1)
and we can factor out x^2 to get u^2 = x+1 in the (u,x) plane and the special solution x^2 = 0.
```
The second chart is when y is not zero
```
chart 2: x = vy
Then we have y^2 = v^2y^2(vy+1)
and we can factor out y^2 to get 1 - v^2 - v^3y = 0 in the (y, v) plane and the special solution y^2 = 0.
```
The two charts can be glued together nicely in the region where they are both defined by the transforms x = vy, y = ux (equivalently v = 1/u), so the larger curve lives in the union of both charts. These charts and their transition form projective space, as they represent the set of lines through the origin in the plane, so a blow-up is always done in projective space (this generalizes to more variables easily).
Now this larger curve sits over the original curve, and the node at the origin has two preimages, u = ±1 in one chart and v = ±1 in the other, one for each branch through the node. The blow-down map just substitutes back: (x, u) goes to (x, ux) in chart 1, and (y, v) goes to (vy, y) in chart 2.
The special-case solutions x^2 = 0 in chart 1 and y^2 = 0 in chart 2 are the "exceptional divisor": a portion of the non-singular picture that is codimension one and is compressed to zero dimensions by the blow-down map. Geometrically, when you project from three dimensions to two dimensions you flatten one dimension, and that flattened dimension is the exceptional divisor. It also defines the map, because the flattened dimension determines the projection.
The exceptional divisor is exactly the location of points where your blow down map is not "smooth".
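The chart computation above can be sanity-checked numerically. This is a minimal Python sketch (the function names are mine, not standard terminology): in chart 1 the resolved curve is u^2 = x + 1, so it can be parametrized by u alone, and the blow-down y = ux should always land back on the nodal cubic, with the node having two preimages at u = ±1.

```python
# Numeric check of the blow-up of the nodal cubic y^2 = x^2(x+1).
# In chart 1 (y = u*x) the strict transform is u^2 = x + 1, so the
# resolved curve is parametrized by u: x = u^2 - 1, y = u*x.

def blow_down(u):
    """Map a point u on the resolved curve back to the (x, y) plane."""
    x = u**2 - 1          # from u^2 = x + 1
    y = u * x             # from the chart relation y = u*x
    return x, y

def on_nodal_cubic(x, y, tol=1e-9):
    """Does (x, y) satisfy y^2 = x^2(x+1)?"""
    return abs(y**2 - x**2 * (x + 1)) < tol

# Every point of the resolved curve maps onto the singular curve.
assert all(on_nodal_cubic(*blow_down(u / 10)) for u in range(-30, 31))

# The node at the origin has TWO preimages, u = 1 and u = -1:
# the two branches (slopes +1 and -1) have been pulled apart.
assert blow_down(1.0) == (0.0, 0.0)
assert blow_down(-1.0) == (0.0, 0.0)
```

This is exactly the "shadow" picture: the resolved curve never crosses itself in (x, u) coordinates, and the crossing only appears after projecting away u.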
But one blow up at a point usually isn't enough. If you have higher order zeroes, you may need multiple blow ups, and you can have many degenerate points and need to blow up many times. So the idea is to keep blowing things up until you get something smooth. And then you compose all your previous blow up maps to get your blow down map from your smooth variety back to your singular variety.
This was all known in the 19th Century.
Where this approach may fail: it's possible to blow something up and not have the result be smooth, which happens when the exceptional divisors do not have smooth crossings. So Hironaka needed to prove that there always exist exceptional divisors with smooth crossings, which is why he needed to track all the other divisors and use dimensionality arguments to show that there exists a divisor transverse to all of them. Additionally, he needed to find smooth subvarieties to blow up, which is where his "hypersurface of maximal contact" approach fits in, and this is done via induction on the dimension. These are the primary obstacles Hironaka overcame, and they are covered in detail in the paper.
The happy path for me is with Erlang: due to the concurrency model, the blast radius of an error is exceptionally small, so the programming style is to let things crash if they go wrong. So really you are writing the happy-path code only (most of the time). Combine this approach with some very robust tests (does this thing pass the tests / behave how we need it to?) and you're close to the point of not really caring about the implementation at all.
Of course, I still do, but I could see not caring becoming possible down the road with such architectures.
Eh, maybe. I work on a big, mature, production Erlang system with millions of processes per cluster, and while the author is right in theory, these are quite extreme edge cases and I've never tripped over them.
Sure, if you design a shit system that depends on ETS for shared state there are dangers, so maybe don't do that?
I'd still rather be writing this system in Erlang than in another language where the footguns are bigger.
Generally agree. All the problems I've had with Erlang have been related to full mailboxes, or having one process type handling too many different kinds of messages, etc.
These are manageable, but I really, really stress- and soak-test my releases (max possible load / redline for 48+ hours) before they go out, and since doing that things have been fairly fine; you can usually spot such issues in your metrics that way.
There are a bunch for sure! Turns out writing concurrent reliable distributed systems is really hard. I’ve not found anything else that makes them easier to deal with than BEAM though.
I'd switch if something better came along that also happened to be as battle-hardened. I'll be waiting a while, I think.
Yeah, but the current administration making its own people's lives worse, by starting a war and inviting attacks such as this one, wouldn't manufacture any consent for those things.
If anything, it would manufacture opposition. The US general public blames the administration for any negative consequences resulting from the administration's war of choice: Attacks, high energy prices, further loss of US credibility, etc.
https://archive.org/details/IlluminationRadio
Pick an episode with your rng of choice.