
I’ve heard this is very airport dependent

Low income and liberal is usually code for certain “undesirables” that conservatives tend to dislike. Better watch what LLM your kids use or they might end up speaking Spanish and listening to rap ;).

It's not about liking / disliking, but conservatives tend to prefer staying together even if it's a bad relationship, and liberals prefer splitting by default if there are serious problems.

The sycophant style is clearly categorized as more liberal (do what you feel is good).


Does that explain Trump's numerous wives?

Reading your comments is a wonderland of right wing bias.


Eh, or grow up hating America and thinking they need to fly to Cuba to explain to the people how great communism is for them. Who knows.

I understand the concern but then to make this available for adults you now have to provide proof of age to companies, which opens up another can of privacy worms.

Theoretically we don't actually need proof of age. Websites need to know when the user is attempting to create an account or log in from a child-locked device. Parents need to make sure their kids only have child-locked devices. Vendors need to make sure they don't sell unlocked devices to kids.

> Theoretically we don't actually need proof of age. Websites need to know when the user is attempting to create an account or log in from a child-locked device. Parents need to make sure their kids only have child-locked devices. Vendors need to make sure they don't sell unlocked devices to kids.

Given how current parental controls work, kids are not getting access if their device is under parental control (the default for open web access is off). So Facebook still won't see any child-locked devices, even before this ruling. My guess is that this ruling applies to parents who aren't making sure their kids get access only via child locked devices.


The actual problem is that there are parents, and I remember them from growing up, who do not care what their kid is exposed to and won't flinch at anything. I'm sure most here had a "Jeff's mom" who didn't care if you guys were playing Mortal Kombat while blasting Wu-Tang at 9 years old.

So even if 95% of kids have responsible parents locking down access, there will still be this 5% that will continue to drip horror stories that motivate knee-jerk regulation.


Exactly.

Trying to approach it from the direction of websites determining if you are an adult is a privacy nightmare and provides a huge attack surface. (Which is what the government wants--the ability to monitor.) Flipping it over is much, much safer--but fails the real mission of exposing dissent.

(On-device security: the credential of the adult is loaded onto the device but never transmitted anywhere; it can only be obtained locally. The device simply responds as to whether it has a credential loaded. Bad guys are unlikely to want to sell such devices, as the phone could be traced back to them.)

And the parents can select a strict child lock, or permitted-but-copies-forwarded-to-the-parent.


Children do not want child-locked devices, and they will find alternatives.

As with smoking, alcohol, sex, drugs etc

Children who are smart enough to get access to a given vice without getting caught are more likely to be mature enough to be able to cope with that vice.


Sorry what?

Kids with low parental supervision who steal Uncle Roy's Marlboros are more likely to be able to cope with tobacco addiction?

Do you have any reasons to think this might be the case? Studies, research, a well thought-out article?


To get reliable access you either need to convince an adult to give you access (which is always game over) or you need to engage in some kind of future planning, which is a similar skill set to the one needed to notice that getting addicted to the cancer thing might be a bad idea. Stealing Uncle Roy's Marlboros doesn't work, because Uncle Roy is generally going to notice that they're going missing and either start securing them better or deduce where they're going and visit some punishment on the kid.

I mean what if Roy doesn't care?

We're just optimising for kids with shitty family at this point.


If Roy doesn't care then you have a kid with an adult who gives them access, which is the scenario where none of this is going to work. Even if you required government IDs with hourly retina scans, it doesn't work if Roy is willing to let the kids hold the device up to his face whenever they want.

Sure, I agree.

I only disagree with the just-so notion that kids who have an Uncle Roy are somehow better able to cope with the consequences. Ability to access something is (IMHO) pretty uncorrelated with the ability to cope with the consequences.


The original claim wasn't that the kids with an Uncle Roy would be better able to cope; it's that the kids who could devise another way to get access would be, even if they didn't have an Uncle Roy. The latter kids then make up a larger proportion of the ones who can get past, because they have two paths to do it instead of one. And the former ones are the ones we can't reach regardless.

Let's look at that original claim -

"As with smoking, alcohol, sex, drugs etc

Children who are smart enough to get access to a given vice without getting caught are more likely to be mature enough to be able to cope with that vice."

There are at least two problems here. The one I've focused on first that you seem so keen to dispel, is an assumption that there are smart kids overcoming a challenge. 'Roy' is an extreme, but there is a whole spectrum of low-oversight conditions that are likely to lead to kids getting access to alcohol, tobacco, drugs, having sex etc, which are nothing to do with smartness or challenges and are much more to do with shitty parenting and neglect.

Then there's the second problem. Let's focus on tobacco, but I believe it's likely to hold for other drugs: even if we allow that children getting access to tobacco are 'smarter' than those who don't figure it out, and are overcoming various obstacles, that doesn't actually imply that they'll be better able to deal with the consequences. Just like how a high IQ doesn't always mean someone is good at crossing the road safely or tying their shoelaces.

In fact there's a variety of research on nicotine's effect on developing brains showing that the earlier people are exposed, the more heavily and the longer they tend to be addicted. This is the opposite of the original claim: kids who start earlier are demonstrably less able to 'cope' with the vice.

The whole claim is nonsense.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC3615117/ [1] https://www.tobaccoinaustralia.org.au/chapter-6-addiction/6-...

(edit - I'm not making specific claims about cybersecurity or access to tech here, I just think the analogy is pretty seriously wrong in itself)


> The one I've focused on first that you seem so keen to dispel, is an assumption that there are smart kids overcoming a challenge. 'Roy' is an extreme, but there is a whole spectrum of low-oversight conditions that are likely to lead to kids getting access to alcohol, tobacco, drugs, having sex etc, which are nothing to do with smartness or challenges and are much more to do with shitty parenting.

Let's consider the four combinations of the two variables here. You have dumber and smarter kids, and worse and better parents. The kids with the worse parents will have access to the vice regardless of whether they're dumb or smart, but the kids with the better parents will only have access if they're smart enough to figure out how against parents actively trying to prevent it. Therefore both of the quadrants with smarter kids can get access, while the dumber kids can only when they have worse parents, implying that two of the three quadrants with the ability to do it are the smarter kids.
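The quadrant-counting above can be checked mechanically. A minimal enumeration (labels are just illustrative):

```python
# Enumerate the four (kid, parents) quadrants from the argument above
# and keep those where the kid gets access to the vice.
from itertools import product

quadrants_with_access = [
    (kid, parents)
    for kid, parents in product(["dumber", "smarter"], ["worse", "better"])
    # worse parents -> access regardless; better parents -> only smarter kids
    if parents == "worse" or kid == "smarter"
]

print(quadrants_with_access)
# 3 of the 4 quadrants have access, and 2 of those 3 are the smarter kids
smarter = sum(1 for kid, _ in quadrants_with_access if kid == "smarter")
print(smarter / len(quadrants_with_access))  # 2/3
```

Of course, as the sibling comment notes, this only tells you the share of *quadrants*, not of *kids*; without the relative sizes of each quadrant it says nothing about the population.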

> even if we allow that children getting access to tobacco are 'smarter' than those who don't figure it out, and are overcoming various obstacles, that doesn't actually imply that they'll be better able to deal with the consequences.

That's assuming the way they deal with it better is by trying the drug and then somehow not getting addicted rather than by choosing not to try the drug to begin with even though they could access it if they wanted to, or otherwise making more measured choices if they do decide to try something, like finding a source more likely to be providing the expected amount of the expected substance instead of who knows how much of who knows what. Or just hesitating a while so their first time comes at an older age.


> implying that two thirds of the quadrants with the ability to do it are the smarter kids.

But only one of those involves overcoming anything.

And unless you have information on the relative sizes of those quadrants, it’s meaningless in terms of the overall picture and being able to confidently assert that access to such contraband allows you to draw any inferences about intelligence whatsoever.

And the rest appears to be some serious mental gymnastics to avoid the point, which I don’t believe for a second was meant to encompass “children who are smart enough to get access to do a thing but don’t actually do the thing because they’re so damn smart”. Nor do I believe that 14 year olds who find a willing drug dealer are more likely to take sensible precautions than their peers, having proven their smarts by finding one!

The whole premise is laughable.


I think we’re going to see how that plays out with gambling.

It seems a bit silly to think security abstinence is the solution.


The issue is not just age verification but also device pinning.

I think the framework here is to have community-driven age verifiers (I recall there is an EU effort for digital wallets which, besides its bad parts, has some of these good parts) which can verify ages for people and link them to (locally, biometrically encrypted) devices for pinning. This would be privacy-preserving. The only downside is a mandate for all devices to have built-in hardware biometric encryption, like a finger/face print, so phones can't just be used with these apps installed.

The verification part is a job that could be done by all the teachers and coaches and of course parents. Anyone verifying identities would be cryptographically nominated/revoked by a number of more senior members of the community. A parent always gets the right to say ok for their kid, of course, but so could teachers or legal guardians.

We legally need a mandate for smart devices to have local-device-only biometric verification. The law should require these apps to follow device app store protocols.
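The nominate/revoke idea above can be modeled without any real cryptography: someone may act as an age verifier only while endorsed by some quorum of senior community members. A toy sketch (the quorum size and all names are my own assumptions, and signatures are omitted entirely):

```python
# Toy model of community-nominated age verifiers: a candidate is a
# valid verifier only while endorsed by >= QUORUM senior members.
# In a real system each endorsement would be a verifiable signature.
QUORUM = 2

endorsements: dict[str, set[str]] = {}  # candidate -> senior endorsers

def nominate(senior: str, candidate: str) -> None:
    endorsements.setdefault(candidate, set()).add(senior)

def revoke(senior: str, candidate: str) -> None:
    # Withdrawing an endorsement can drop a verifier below quorum.
    endorsements.get(candidate, set()).discard(senior)

def is_verifier(candidate: str) -> bool:
    return len(endorsements.get(candidate, set())) >= QUORUM

nominate("principal", "teacher_a")
nominate("coach_lead", "teacher_a")
print(is_verifier("teacher_a"))  # True

revoke("coach_lead", "teacher_a")
print(is_verifier("teacher_a"))  # False
```

The interesting property is that revocation is as cheap as nomination, so the community can route around a compromised verifier without any central authority.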


Well then don't give them money to do so; it's not like phones grow on trees. If you make selling a phone/internet device to a minor under a certain age threshold an illegal act, severely punished by law in the same way as alcohol and cigarettes, many cases of access are solved. Also, a paid internet subscription doesn't grow on trees either, even though there are free wifi networks.

All imperfect solutions, but they slice original huge problem into much smaller chunks which are easier to tackle with next approach.


True, it's never going to be 100%, but at least it's a tractable problem for parents. Enough to change what the culture considers "normal," anyway.

Imperfect solutions are still called "solutions".

Theoretically only

> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13.

https://archive.ph/y3pQO




I believe Zuckerberg has a term for people who willingly break online anonymity because someone with a domain name and website asks them to.

Establishments don't record my data or even take down my name. They take a look at the birthdate and wave me forward.

We need a way to do this online.

> Establishments don't record my data or even take down my name.

What are you talking about? Have you really never rented a car before?

Some establishments, as part of their business practice, require identification.


And many don't. Bars, nightclubs, liquor stores, tobacconists, R-rated movies.

We don't see people worried that bars, nightclubs, liquor stores, tobacconists, R-rated movies asking for age verification will slip into requiring names too.

It honestly looks like an emotional panic. People who take seriously slippery slopes aren't to be taken seriously themselves.

Social media is like e-cigarettes in the sense that the shift toward nicotine salts (think Juul) around 2015 resulted in e-cigarettes becoming more dangerous and thus more age-restricted.

It's also like consumer credit cards. Remember that in 1958 Bank of America just mailed out 60,000 unsolicited credit cards to residents of Fresno, CA without application, age verification, or identity check. They just landed in people's mailboxes, including those of minors. Eventually a predatory lending industry developed and we increased the age and ID requirements. My point is that systems can, and do, become more dangerous over time. Not all, but not none.

Algorithmic feeds, online advertising, and attention engineering are the nicotine salts of social media. The product's changed, so should the access.


>We don't see people worried that bars, nightclubs, liquor stores, tobacconists, R-rated movies asking for age verification will slip into requiring names too.

Do we not? Sellers often don't just look at IDs now, they scan them into their system, and naturally, keep and sell your identity info, purchase data, and anything else they have access to.

>Algorithmic feeds, online advertising, and attention engineering are the nicotine salts of social media. The product's changed, so should the access.

This basically makes it clear. The problem is not that children are on social media. The problem is that "social media" has been allowed to become a platform for exploitation and manipulation by their owners. Adults aren't free from this either.


Digital age verification laws I've read also literally specifically ban recording that information, unlike in person. People were arguing with me that companies would decide they need to retain that info for audit purposes when there are no audit requirements and when it's illegal to store it for any reason.

> People who take seriously slippery slopes aren't to be taken seriously themselves

> Eventually a predatory lending industry developed and we increased the age and ID requirements

I have no idea if you're arguing for or against verification. You dismissed the idea that age verification is a slippery slope to more stringent ID requirements. Then you provided an example where the exact opposite happened.


I'm not arguing that social media will get worse, I'm arguing that it has gotten worse. A slippery slope argues that something will happen. I'm pointing out that it has happened. Huge difference.

Even more, my point is that rules, regulations, and requirements adapt when these changes become unbearable. That has happened with social media, therefore a change in rules, regulations, and requirements is deserved.


I’m confused by this comment. The original comments talk about Kagi not living up to the hype. You say you’ve had the same experience and wish you could get LLMs to use Kagi for web searches?


Especially odd as that’s exactly what Kagi assistant already does. Maybe they’d just rather use their key than pay Kagi for LLM based search.

On that note, Kagi research is legit amazing. There have been times I've spent 30 min searching for something without success. As a last resort I asked Kagi research and it found why I could not. More than one option, even. Now I intend to use it almost more than normal search.


Yeah, I agree with other comments here that the traditional search offering has gotten a bit worse (I think because the whole web has gotten worse), but research surfaces great results. Forums, small blogs and websites with authoritative views on subjects I search for. Really great. Yes, it is as expensive as any other AI pro plan unfortunately, but worth it for me.

What’s your reasoning for labeling lawyers as low-exposure?


My partner is a lawyer (prosecutor for a large city). The reason she is at low risk is simply because of the rate of adoption of AI tooling (or ANY tooling for that matter). IT in the public sector (particularly city government) is so much worse than I ever could have imagined before meeting my partner.

Our city just spent >$15MM on "case management software" that took 5 years to build by some fly-by-night outfit in California who won the contract, haphazardly bolted together MSFT Azure components, then vanished with zero support.

These teams can't in good faith freely adopt AI tooling into their workflow because they don't have the bandwidth to do it well, so they don't do it at all.


That's largely based on the original analysis and their methodology. "Responsibility" (only attributable to humans) is one reason, another is that judges probably don't want to speak with robots in court.


Unions would’ve been useful at a time when CEOs are salivating at the idea of slashing jobs and replacing SWEs with AI.


I think they would still be useful. Call me cynical, but gone are the days when the individual comp and benefits available to SWEs outweighed the benefits of collective bargaining.


Thanks for chiming in! My takeaways are that, as of today:

- Using a stack your team is familiar with still has value

- Migrating the codebase to another stack still isn’t free

- Ensuring feature and UX parity across platforms still isn’t free. In other words, maintaining different codebases per platform still isn’t free.

- Coding agents are better at certain stacks than others.

Like you said any of these can change.

It’s good to be aware of the nuance in the capabilities of today’s coding agents. I think some people have a hard time absorbing the fact that two things can be true simultaneously: 1) coding agents have made mind bending progress in a short span 2) code is in many ways still not free


You also need a system that is ok with giving you some of said abundance without you working.

Last year the US voted to hand over the reins, in all branches of government, to a party whose philosophy is to slash government spending and reduce people’s dependence on the government.

To all the US futurists who are fantasizing about a post-scarcity world where we no longer work, I’d like to understand how that fits in with the current political climate.


The thing a lot of people leave out is that literally billions must die for this to happen. In some fully automated world everyone except for a few tens of thousands of the owner class and their technicians will be unneeded. And then what to do?


How did you arrive at that conclusion? Dividing infinity by 1m or 1b doesn't matter if it's really infinite. Just make more machines to make the machines. The existential crisis happens afterwards, and people will kill themselves off without the need for any class warfare at all. In fact the owner class will die first since there will be no more conception of ownership, since everything is supposedly abundant and at your fingertips.


You really believe today's billionaire class will just give up their power over the populace? A world of abundance means the billionaires are irrelevant because everyone would have access to everything and they would never let that happen.

They will hoard the resources, land, anything that is needed for people to stay alive.


Your argument seems to be boiling down to, there's no point to improve quality of life because billionaires are just going to hoard all the improvements.

Surely the problem with that is the billionaires, not the world of abundance though?


Voting for 'indifference to peoples dependence on the government' does not equal 'reduce people's dependence on the government'.

There is zero actual intentional reduction of dependence, just elimination of government support.


It fits because now you can start up the conquering war machine and have a bunch of soldiers who're willing to kill in another country before starving in theirs


I have a hard time telling whether agentic coding tools will take a big bite out of the demand for software consultants. If the market is worried about SaaS because people think companies will use AI to code tools internally vs buying them, I would think the same would apply to consultants.

I’ve seen the code current tools produce if you’re not careful, or if you’re in a domain where training data is scarce. I could see a world where a couple of years from now companies need to bring outside people to fix vibe coded software that managed to gain traction. Hard to tell.


It's a good question. I think short-term (5 years) the easy jobs will go away. No one is going to write a restaurant web site by hand. Maybe the design will still be human-made, but all the code will be templated AI. Imagine every WordPress template customized by AI. That's a whole bunch of jobs that won't exist.

Right now I'm creating clinical trial visualizations for biotech firms. There's some degree of complexity because I have to understand the data schema, the specifics of the clinical trial, and the goals of the scientists. But I firmly believe that AI will be able to handle most of that within 5 years (it may be slower in biotech because of the regulatory requirements).

But I also firmly believe that there is more demand for (good) software today than there are programmers to satisfy it. If programmers become 10x more efficient with AI, that might mean that there will be 10x more programs that need writing.


Yeah the “not invented here” syndrome was considered an anti pattern before the agentic coding boom and I don’t see how these tools make it irrelevant. If you’re starting a business, it’s still likely a distraction if you’re writing all of the components of your stack from scratch. Agentic tools have made development less expensive, but it’s still far from zero. By the author’s admission, they still need to think through all these problems critically, architect them, pick the right patterns. You also have to maintain all this code. That’s a lot of energy that’s not going towards the core of your business.

What I think does change is that now you can more easily write components that are tailor-made to your problem and situation. Some of these frameworks are meant to solve problems at varying levels of complexity and need to worry about avoiding breaking changes. It’s nice to have the option to develop alternatives that are as sophisticated as your problem needs and no more. But I’m not convinced that it’s always the right choice to build something custom.


I'm not sure.

The cost of replacement-level software drops a lot with agentic coding. And maintenance tasks are similarly much smaller time sinks. When you combine that with the long-standing benefits of inhouse software (customizable to your exact problem, tweakable, often cleaner code because the feature-set can be a lot smaller), I think a lot of previously obvious dependencies become viable to write in house.

It's going to vary a lot by the dependency and scope - obviously owning your own React is a lot different from owning your own leftpad, but to me it feels like there's no way that agentic coding doesn't shift the calculus somewhat. Particularly when agentic coding makes a lot of nice-to-have mini-features trivial to add, so the developer experience gap between a maintained library and a homegrown solution is smaller than it used to be.

