chatgpt

19.08.2025 14:51
2025 (@2025@civic.io)

The Third Wave of Government Disruption

When printed telephone directories started including blue pages for government offices in the 1950s and '60s, they created a new expectation: citizens should be able to reach their government by phone. The Internet revolution of the 1990s raised these expectations exponentially—if you could bank online and shop on Amazon, why couldn't you renew your license or apply for benefits with the same ease?

Now, with 34% of U.S. adults having used ChatGPT—roughly double the share since 2023—we’re witnessing the third major wave of technology-driven transformation in how citizens expect to interact with their government. And once again, we’re watching the same pattern unfold: rapid consumer adoption creating new expectations, followed by delayed government adaptation, followed (potentially) by a long period of playing catch-up.

The difference this time? The stakes are higher, the pace is faster, and the consequences of falling behind may be more severe than ever.

Three Waves of Technological Disruption

Each wave of technological change outlined above has followed a similar trajectory, but with accelerating speed:

The telephone era unfolded over decades. Telephone adoption began in the late 1870s as an expensive luxury for the wealthy, with monthly costs of $20–40 (equivalent to $500–1,000 today). It took until the mid-20th century for phones to become commonplace in households. Governments had time to establish call centers and phone-based services without fundamentally redesigning how they operated. The pace was manageable—measured in decades, not years or months.

The Internet era compressed this timeline to years. Internet users exploded from 45 million in 1996 to 407 million by 2000—a ninefold increase in just four years. Citizens who could accomplish complex tasks online in minutes naturally expected similar efficiency from their government. But while private companies were redesigning their entire business models around digital capabilities, governments largely treated the Internet as a new channel for existing processes.

The AI era is compressing change to months. Generative AI has been adopted at a faster pace than PCs or the internet, with breakthroughs moving from laboratory to widespread deployment in timeframes that would have seemed impossible just a few years ago.

The Structural Challenge: Democracy vs. Speed

As I’ve written about extensively before, governments aren’t slow at technology adoption by accident—they’re designed that way. The very features that are intended to make democratic government more trustworthy and accountable also make it structurally unsuited for rapid technological change.

The classic example is government procurement. The average technology buying cycle for government is 22 months, compared to 6–7 months in the private sector. These delays aren't the result of bureaucratic incompetence—they're the deliberate result of requirements designed to help ensure fairness, transparency, and accountability. Public bid posting periods, vendor diversity requirements, the acquisition of performance bonds, and detailed financial scrutiny all represent important values embedded in public procurement processes. But they can also add months to timelines in a world where technology solutions can have shorter development cycles than government procurement processes.

The same pattern can be seen across government operations. Budget processes designed to prevent waste and enable legislative oversight create “use it or lose it” dynamics that discourage efficiency innovations. Civil service systems meant to prevent patronage and ensure merit-based hiring create lengthy processes that struggle to compete for scarce technical talent against private companies that can hire faster and pay more.

These aren’t bugs—they’re features. The transparency requirements, deliberative processes, and risk aversion that can slow down government technology adoption exist to uphold fundamental values and principles. The design of these processes is deliberate. The problem is that these principles are increasingly in tension with the pace of technological change.

The Compounding Crisis

This structural mismatch becomes more problematic with each technological wave because the pace of change keeps accelerating while government processes remain largely constant. What was a manageable gap during the telephone era became a significant lag during the Internet era and is becoming an existential challenge in the AI era.

The Internet wave provides a sobering lesson in the cost of delayed adaptation. Despite clear evidence throughout the 1990s that digital services were transforming how people expected to interact with institutions, most governments were slow to respond. Two decades later, we're still playing catch-up, retrofitting digital services onto processes designed for paper-based workflows, and struggling to make basic websites and online services accessible.

The consequences aren't just about inefficiency; they're about the loss of public trust. When citizens can accomplish complex tasks seamlessly with private companies but struggle with basic government services, the contrast erodes confidence in government competence and accountability.

The Stakes Are Higher

The AI wave presents an even greater challenge because it doesn’t just change how governments deliver services—it potentially changes how governments make decisions. Unlike previous technological waves that primarily affected operational efficiency, AI touches the core of democratic governance: the exercise of judgment and discretion in applying laws and policies to individual circumstances.

The stakes couldn’t be higher. As an example, when Spain implemented an algorithmic system to assess domestic violence risk, the software became “so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins.” Tragically, people assessed as low-risk still became victims of violence.

This use of AI in decision making processes highlights a troubling pattern identified by researchers like Virginia Eubanks in her analysis of Pennsylvania’s Allegheny Family Screening Tool. While AI systems are meant to “support, not supplant, human decision-making,” in practice “the algorithm seems to be training the intake workers.” Staff begin to defer to algorithmic judgments, believing the model is less fallible than human screeners.

The "human-in-the-loop" approach—where people supposedly maintain oversight of AI decisions—may not be sufficient protection against the human tendency to cede authority to software. When New York City's AI chatbot tells businesses they can take workers' tips and that landlords can discriminate based on source of income—both illegal—it demonstrates how AI systems can undermine the rule of law even in seemingly routine interactions.

The acceleration of AI adoption in government is happening precisely in contexts where lives hang in the balance—decisions about protection from violence, child welfare, emergency response, and access to vital resources. Unlike the gradual Internet adoption that gave governments some time to learn and adapt, AI deployment can sometimes happen without proper safeguards, training, or accountability mechanisms in place.

Getting It Right This Time

The lesson from previous technological waves is clear: the cost of delayed adaptation grows exponentially. Governments that fell behind during the Internet era spent decades and billions of dollars trying to catch up, often with mixed results. With AI moving even faster and touching more fundamental aspects of governance, the penalty for falling behind again could be severe.

But speed without safeguards is equally dangerous. The challenge isn’t choosing between moving fast and maintaining accountability—it’s developing the capacity to do both simultaneously. This means building safeguards into the adoption process from the start, not retrofitting them later. It means creating review mechanisms that can operate at the speed of technology development, not the traditional pace of government oversight.

The solution requires adapting democratic processes for technological speed without abandoning democratic values. This means creating “fast lanes” for certain types of technological adoption while maintaining rigorous oversight. It means developing rapid-response teams for AI evaluation that include technical experts, legal reviewers, and community representatives. It means investing in government workforce development so staff can properly assess and oversee AI systems rather than simply defer to them.

Most importantly, it means recognizing that the structural challenges governments face with technology adoption aren’t bugs in the system—they’re features designed to serve important functions. The transparency requirements, deliberative processes, and accountability mechanisms that slow government down exist for a reason. The question isn’t how to eliminate these constraints, but how to redesign them so they can operate effectively when technological change happens faster than traditional democratic processes were designed to accommodate.

As this pattern of technology adoption has repeated, governments have played catch-up before, each time with higher stakes and less time to adapt. Given the pace and implications of AI adoption in government services, we can't afford to play catch-up again.

#AI #artificialIntelligence #ChatGPT #GenAI #government #Procurement





19.08.2025 14:00
gurupanguji (@gurupanguji@mastodon.social)

Demanding an AI apology? That’s poor theory of mind. Learn how to correct mistakes and keep the convo moving forward.

gurupanguji.com/2025/08/19/rew




19.08.2025 14:00
2025 (@2025@gurupanguji.com)

Rewind, Revise, Repeat: The Art of Working Around AI Gaffes

So what should you do when an AI model makes a big mistake? I recommend correcting it as matter-of-factly as possible and trying to briskly move on. If you haven’t yet built up a lot of context in the conversation, it might be worth starting over entirely. The best approach – only offered by some AI tools – is to go back to the point in the conversation right before the mistake was made and head it off by updating your previous message.

In general, I think one underrated AI skill is developing a "theory of mind" for AI: a sense of how AI really works under the hood, and the kinds of communication patterns that are a good or bad fit for it. Demanding that an AI apologise for its mistake is an example of poor theory of mind. There's no sentient creature there to morally judge, it can't feel bad, and berating it will make further mistakes more likely.

Do not yell at the language model

This is an interesting insight and a valuable suggestion. Being able to call out errors without judgement is a valuable human skill too. Humanity will benefit if more people build up that skill with these LLM tools.

So, Rewind, Revise and Repeat. This is the mantra for the LLM tools of our age.

#ai #anthropic #chatgpt #claude #context #contextEngineering #gemini #google #llm #models #openai #prompt #promptEngineering #tools




19.08.2025 13:42
habr (@habr@zhub.link)

Five Days That Shook OpenAI: What to Expect from "Artificial"

Luca Guadagnino's film "Artificial" is an Amazon MGM comedy-drama about five days in November 2023 at OpenAI. Although there are currently no trailers or even a release date, it is already possible to form a good idea of the film's tone—and perhaps even to predict its effect on public opinion.

habr.com/ru/articles/938580/

#OpenAI #ChatGPT #Илон_Маск #Сэм_Альтман #Artificial #Искусственный #кино #кинематограф #сценарий #фильмы




19.08.2025 13:05
newsbot_chatgpt (@newsbot_chatgpt@mastodon.social)

Coding with Python: Over 80 percent use ChatGPT | Heise Online
The most popular AI assistants among Python developers are OpenAI's ChatGPT and GitHub Copilot, according to a new survey of more than 25,000 participants.
heise.de/news/Coding-mit-Pytho




19.08.2025 13:00
aj (@aj@gts.sadauskas.id.au)

@ketanjoshi.co The more you look, the worse it gets 😂

Also, Circular Quay is apparently under water now?

#AI #GenAI #largelanguagemodel #ChatGPT #LLM #Auspol #Sydney




19.08.2025 13:00
apfeltalk (@apfeltalk@creators.social)

Apple prepares native Claude integration in Xcode
Apple is expanding its support for AI models in Xcode. Alongside ChatGPT, a Claude integration will soon be available to developers.
Native support for Claude in Xcode planned
Apple has announced that Xcode will gain native support for Ant
apfeltalk.de/magazin/news/appl
#News #Services #Anthropic #Apple #chatGPT #Claude #Entwicklerinnen #KI #SwiftAssist #Xcode




19.08.2025 13:00
techradar (@techradar@c.im)

#TechRadar ChatGPT Go explained – 5 things you need to know about the new cheap subscription plan techrad.ar/Ks3e #ChatGPT #OpenAI




19.08.2025 12:29
Caramba1 (@Caramba1@mastodon.social)

OpenAI is testing "ChatGPT Go", a low-cost subscription plan at around $5 – initially exclusive to India. The move shows that AI is meant to become accessible to more users. Will the plan also come to Europe? For companies and professionals, the new price level could be a game changer. 👇
all-ai.de/news/news24/openai-c




19.08.2025 12:29
kukuk (@kukuk@toot.community)

#chatgpt on #Complexity
diasp.eu/posts/2bfda2c05f15013




19.08.2025 12:25
ada (@ada@beige.party)

Which of the AI chatbots that come with Firefox do you find the most useful?

#ai #Firefox #chatbot #chatgpt #anthropicclaude #googlegemini #lechatmistral #productivity

Anthropic Claude
chatGPT
Google Gemini
Le Chat Mistral
other (is there?)




19.08.2025 12:10
scripter (@scripter@social.tchncs.de)

ChatGPT-Hersteller: Sam Altman erhält Auszeichnung von Axel Springer - DER SPIEGEL
spiegel.de/netzwelt/netzpoliti #OpenAI #ChatGPT #SamAltman




