Mar 10, 2023 · Liked by Alberto Romero

Good one, Alberto, I like your level-headed take between the extremes. For my less level-headed take: we don't have to choose between blaming bad users and blaming the companies, we can blame them both.

In defense of current AI, it could be that our blame-game instincts, mine included, are seriously out of whack. Just a few days ago I wrote an article related to yours entitled "Exploring The Strange Phenomena Of Outrage".

https://www.tannytalk.com/p/exploring-the-strange-phenomena-of

In that article the question was essentially: who is to blame for tobacco deaths, smokers or the tobacco companies? I acknowledge that each of us is responsible for our own choices, and then I come down hard on the tobacco industry.

Let's establish some context for our concerns about AI.

Did you know that the tobacco companies kill almost as many Americans EVERY YEAR as were killed in all the wars Americans fought in over the last century? The CDC puts the yearly death toll at around 480,000.

Seeing that is making me wonder why I hang out on AI blogs wringing my hands about chatbots. Have chatbots killed a single person yet?

It's interesting how we choose what to get all worked up about. I don't claim to know how that works, but it does seem that cool-headed logical analysis is not a big part of the process.


Great article, and I'm really liking these takes. There was a book way back in 2001 or thereabouts called "Mac OS: The Missing Manual", and maybe AI needs something like that (paging O'Reilly...). The feedback I sent after using Bing Chat was that there should be a fun intro video before Chat access is granted, with someone like Hank Green explaining in regular-person terms what deep learning is and how the model works.

Mar 11, 2023 · Liked by Alberto Romero

I appreciate your perspective, Alberto. I think the ask for an LLM "manual" is a little unreasonable given the inherent flexibility and openness of what these applications and models can do. However, I think there is something to be explored in the way of "templates". Other modern applications with open and flexible systems have deployed templates to guide users into more locked-in and defined use cases. When it works, both users and the companies who own the applications win (Miro, Zapier, Canva, etc.). It's a way to encourage more "coloring inside the lines" and provide a shorter path to value for consumers without completely locking down the openness of the system (which, to me, is part of the beauty of generative AI).

This protection/guardrails concept is a whole other story on the developer-tools side, for folks connecting and building their own applications on top of OpenAI, Stability, etc. Who is held accountable there? The foundation-model providers? The application layer? The user? Hard to say right now.


I much enjoyed the article. Regulation should enforce clarity: openness about when the system resorts to fabricated material to maintain the semblance of a coherent whole, and openness about if and when humans take over in giving responses. Users should have such facts up front. Throwing mud at the wall to see what sticks comes at an expense to society. Clearly these systems have a lot to offer if channelled into clear use cases.

Mar 11, 2023 · Liked by Alberto Romero

Alberto, you wrote a fascinating article, thank you. I see the logic behind your argument; however, I wonder about the actual harm in question. It would be interesting to have you describe actual harm and also link the harm to a chatbot (causality). There has been a lot of hand-wringing since ChatGPT went live in November, and articles galore about its potential for theoretical good and bad, with supporters on both sides making interesting arguments, like yours.

Have we seen actual harm? Have we been able to conclude that the cause of the harm was an LLM? With all of the content on the web, again, both good and bad, accessible to all of us and indexed (since at least 1991) and accessible by search engines right from the beginning (remember AltaVista and chat rooms?), where is the historic harm and who is responsible? I think we do see some causality in the context of the radicalization of youth via YouTube and the use of other social media, but there we focus on the people who are posting using the tool, not the tool itself, although this too is a moving target (s. 230 SCOTUS, anyone). I look forward to more of your writing.

Cheers

Mar 11, 2023 · Liked by Alberto Romero

Excellent analysis. Question: isn't placing the regulatory burden on (presumably) government a throwback to antiquated industrial ideals? Regulatory oversight fit the age of factory workers and rail builders, but this is a brave new world. Tech giants can't afford to be held up in R&D in perpetuity, because that just opens the door to unscrupulous rivals. First-mover advantage can't be denied [devil's advocate here].


To continue with your automotive metaphors: so far, the only harm caused by generative AIs comes from drivers who knowingly, and insistently, drove toward the precipice several times and then provided screenshots as proof that it is possible to have an accident.


While I agree with the general direction of this post (it would be important for AI developers to publish a manual with limitations and such), the car example completely misses the point.

Car makers have never put products on the market that are 100% safe and tested with full knowledge of their limitations.

Just consider that until the 90s it was not compulsory to wear seat belts, or the fact that the vast majority of crash tests are done with mannequins that reproduce only the male body, making airbags and seat belts much less safe for women.

More recently, car makers have put vehicles on the market with features such as GPS images projected onto the windshield (which have been shown to distract the driver and impede the view), or oddly shaped steering wheels chosen for aesthetic reasons that then turned out to make the vehicle less safe. All this without even mentioning the chaos of autonomous vehicles and exploding batteries.

Cars are, and have always been, put on the market with some tests and some regulations, but of all the examples of how to make a product safe, I think this is the worst possible one.


The purpose of ChatGPT is to create words when prompted.

1. ChatGPT should come with a warning that it might create offensive speech, and if the user doesn't want to risk being offended, they shouldn't prompt ChatGPT.

2. If the user decides to publish said words, they are the user's words, and all relevant social and legal constraints on speech apply.

Problem solved?
