This is a large part of what is wrong with American tech today.
Over a decade ago, Meta – then known as Facebook – hired researchers in the social sciences with the goal of analyzing how the social network’s services were impacting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.
But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings of Meta’s internal research and documents seemingly contradicted how the company portrayed itself in public. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way.
Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower.
If you parse that reasonably, the conclusion is that Meta knew its services were addictive and tried to bury it -- and failed, as one of the people doing the work "became a prominent whistleblower." You don't blow the whistle on something a company is transparently disclosing to begin with, do you?
As I've noted, all social media -- and all "public" AI -- has the same basic goal: Keep you on the site and thus earning money for them.
In other words, addiction is the business model when you get down to it. You might not call it that, but a thing is what it does and is designed to do, not what is claimed for it in some marketing material. Marketing is generally not enforceable against a firm due to a legal principle called "puffery": everyone tries to claim they're the best, and consumers both expect this behavior and are expected to discount it.
Thus the paradox for so-called public "AI" models: An "AI" that says "I don't know" causes you to stop spending money. After all, why would you pay for no answer? If that's the answer you get back even some of the time, it deters you from spending more money. Therefore the program gives you "an answer" (which consumes credits) even if it's crap, because to make money it must give you an answer.
All of the arm-waving over "hallucinations" is silly; computers do as they're programmed, and the primary purpose of such a system is for people to consume "credits" which they have to acquire (buy) in some form or another. When you distill down the purpose of all publicly-facing sites like this -- which either sell advertising or use a direct purchase-for-units-of-use model -- that's their prime directive, since no firm ever tries to make less money.
Contrast this with, say, my old ISP (MCSNet.) We sold access, but your price was invariant with use. We had a (very large) cap and other technological means to deter abusive acts -- like nailing open lines you were not actively using -- because those circuits were a scarce (and expensive) resource we had to buy. Other than that, though, we had no financial incentive to increase the number of hours you used the service or the number of emails, articles or pages you read, because in point of fact the more you used, the more resource you consumed, yet the amount of money we made from you as a consumer or business user was fixed.
Meta's web properties, on the other hand, along with all the other social media sites, are the exact opposite. They do not charge a direct fee; however, their earnings are entirely dependent on your eyeballs being screwed to your screen on their site or app, because that is what they sell to advertisers -- and some of them split that revenue with the people generating the views. Yes, the earnings are "indirect" in that other people pay them, but what those people pay for is your eyeballs screwed to said screen.
If you've not noted the ridiculous and obvious "fake content" spike following the initiation of hostilities with Iran, you're blind -- but all that really shows is the shift in focus. It doesn't matter which social media platform; they all can detect this ridiculously-inauthentic content (e.g. alleged things being reported as "I saw this in the US" while the connection comes from Vietnam) but choose not to. Why? Because the more you engage the more money they make, and whether the content is authentic is immaterial to them -- so long as you engage. Indeed, the more outrageous a piece of content is, whether in the dopamine or adrenaline category, the stronger the psychological reaction it will probably provoke in you, and thus the more likely you are to engage further on the platform.
There is an entirely-legitimate legal and societal question here: We do not allow minors to purchase things that are potentially addictive or harmful, as it is our carefully-considered position that such persons cannot give consent. You can, as an adult, buy all the booze you want and consume it. Most people are not addicted to alcohol, but some do become addicted, and it is well documented that alcohol can cause addiction. That most people don't become addicted or experience harm from their use of alcohol doesn't change the law; the potential for such harms, which are known and documented, is sufficient for the law to prohibit minors from purchasing and possessing such things, because minors cannot legally give consent to the potential harms that might occur.
Yet these social media companies -- all of them -- argue that this standard should not apply to their services, and I'm sure the same applies to those peddling "AI".
Indeed, even "soft" addiction attempts are part and parcel of corporate behavior. Apple has not given, and does not give, schools large discounts on its computers because it's nice. It does so because it wishes to imbue in children the premise that an Apple computer is "desirable," even if for no other reason than familiarity. The goal, of course, is for said person, when they become an adult with purchasing power, to buy an Apple computer instead of a competing model.
That we all recognize, but there's no particular risk of harm to the kid who is incentivized to buy, when they become an adult, a computer that might cost more than another of equivalent capability. Macs have always been more expensive than PCs for a given level of performance; it is simply that PCs have literally hundreds of companies building hardware that all runs the same operating systems, while the Macintosh is single-sourced, so if you want the Macintosh "look and feel" you must buy from Apple. By the time such potential harm (to your wallet, in that you spend more) becomes realized you're an adult, and from the minor's point of view the potential risk is both so diffuse and so small that we legally ignore it. As a former CEO I disliked this, as it resulted in young adults "wanting" said machines to be provided in my workplace (machines vastly more expensive than a PC that could perform the same tasks expected of said workers) -- but the answer there was simply for me to say "No."
Plenty of apologists argue that in the context of social media and similar (now "AI") services this is a parent's role to police. Really? Do we let liquor (or weed, in the states where it's legal) retailers sell to kids and rely on the parents to prevent that? Of course not; we recognize that while minors will obtain access to various things illicitly, making money by marketing and selling said things to minors is a prohibited act and subject to both civil and criminal penalties. Indeed, two days ago, as a 62-year-old man with rather-obvious gray hair, I was forced to present government ID in a local store proving I was of legal age to buy a six-pack of beer!
Well?
This isn't about "rights" in any meaningful sense for the simple reason that a minor cannot legally give informed consent, and in some cases we even go so far as to not permit the parents or others who are "in loco parentis" to consent for them (e.g. car seats for children.)
“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” Blocker said. “AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”
No such study is required.
These firms -- all of them -- design their products to increase their use because that is inherently necessary for them to exist and grow their userbase and thus revenue, earnings and market cap.
You need know nothing more than that this is inherently the core design of anything that is "metered" in this fashion (e.g. AI "tokens" expended by consumption) or whose value is inherently determined by frequency and intensity of use. All such services and products are designed to addict the user, psychologically (or possibly physically, in the case of a good), because at their core their revenue and thus potential profit is derived solely from your frequency of use.
The legal (and political) debate is whether minors should be able to lawfully access any such product or service outside the direct observation, and per-use consent, of their parents.
A reasonable position is that no, only adults may choose that which has as its primary goal the increase in consumption of said thing no matter what it is -- in this case, all social media and "AI" type public tools for which advertising or other use-sensitive value (including fees, credits or similar expenditures) accrues to the owning entity. Yes, this applies to "pay for thing" games where spending or time in use accrues benefit to the user and similar too.
How to enforce it? Simple: For users signing up or accessing from inside the United States, the original account must be paid for in some capacity that can be validated as only held by an adult (e.g. a credit card.) The amount is not material, but that you must prove you are an adult via some reasonable proxy is. Yes, some minors will cheat (e.g. steal Mom's credit card), just as they do with fake IDs, but the firm is required to take reasonable steps to verify compliance. For example, if someone signs up using a VPN and then makes access from the US, evidencing that their VPN usage was for the purpose of concealing domicile, the account is to be suspended. If the holder of the card disputes the transaction, the account is banned, and so on. This is easy; there are inexpensive databases from which "where did that connection come from" verification can be obtained, and I use one of them here. Is it perfect? No, nothing is, but it demonstrates intent and reasonable attempts, and that's the point.
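As an illustrative sketch only -- not any platform's actual implementation -- the "signed up through a VPN, then used plainly from the US" check described above might look like the following. The `GEO_DB` table, field names and `lookup` function are hypothetical stand-ins for a commercial IP-geolocation database; real ones expose country and anonymizer/VPN flags in much the same shape.

```python
# Hypothetical stand-in for a commercial IP-geolocation database.
# The IPs below are from documentation ranges (RFC 5737); the records
# and field names are illustrative assumptions, not a vendor's API.
GEO_DB = {
    "203.0.113.7":  {"country": "VN", "is_vpn": False},
    "198.51.100.9": {"country": "US", "is_vpn": True},
    "192.0.2.44":   {"country": "US", "is_vpn": False},
}

def lookup(ip):
    """Return geolocation info for an IP, defaulting to 'unknown'."""
    return GEO_DB.get(ip, {"country": "??", "is_vpn": False})

def should_suspend(signup_ip, access_ips):
    """Flag an account whose signup concealed its domicile (VPN or
    non-US origin) but which is later used from a plain US connection --
    the pattern described above as grounds for suspension."""
    signup = lookup(signup_ip)
    concealed = signup["is_vpn"] or signup["country"] != "US"
    for ip in access_ips:
        info = lookup(ip)
        if concealed and info["country"] == "US" and not info["is_vpn"]:
            return True
    return False

print(should_suspend("198.51.100.9", ["192.0.2.44"]))  # VPN signup, plain US use
print(should_suspend("192.0.2.44", ["192.0.2.44"]))    # ordinary US signup
```

The point of the sketch is only that the rule is mechanical: a handful of lines against a lookup table the firm already pays for, which is why "we can't" is not a credible answer.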
For those who say that's "unreasonable": explain why Meta, when I tried to set up another account a decade or so after I voluntarily closed my original Facebook account myself (it was not closed by them due to some "violation"), claimed I was "inauthentic" -- despite the email address I used for registration being one that has been under my exclusive control for two decades -- and, when challenged, upheld their "finding" as final with no means of dispute or manual review by an identified person I could speak with. My purpose for signing back up was to view and interact with Marketplace, not other general use. I can reasonably surmise they know exactly who I am; they also know that I have written many pieces for publication that are very critical of the firm, and rather than simply state "we refuse service to you because we dislike you and do not want you on our property" they lied. Well, if they can do that then they can certainly block, with a high (but not perfect) degree of reliability, use by those who are not of legal age.