chatgpt

11.08.2025 11:44
Mathrubhumi_English (@Mathrubhumi_English@mastodon.social)

Join the free National Maths Webinar on August 15 to learn how ChatGPT works, from graph theory to Markov models. Class by Prof M Ram Murty english.mathrubhumi.com/educat







11.08.2025 11:39
2025 (@2025@rbfirehose.com)

Ars Technica: After using ChatGPT, man swaps his salt for sodium bromide—and suffers psychosis. “After seeking advice on health topics from ChatGPT, a 60-year-old man who had a ‘history of studying nutrition in college’ decided to try a health experiment: He would eliminate all chlorine from his diet, which for him meant eliminating even table salt (sodium chloride). His ChatGPT conversations […]

https://rbfirehose.com/2025/08/11/ars-technica-after-using-chatgpt-man-swaps-his-salt-for-sodium-bromide-and-suffers-psychosis/






11.08.2025 11:35
macst3r (@macst3r@mastodon.social)

-model from […]: uncovered in […]

golem.de/news/ki-modell-von-op






11.08.2025 11:31
chotemysl (@chotemysl@nrw.social)

"Users also criticized the sudden absence of a trusted conversation partner. For users […], the loss of the old GPT-4o model was not just noticeable; it even provoked feelings of loss and grief."

ifun.de/nach-nutzerprotesten-o

WHAT? A conversation partner? Feelings of loss? It's a pile of scrap metal that combines words very quickly! How broken do you have to be to see something like that as a person? #ChatGpt #ki #llm






11.08.2025 10:55
aiandemily (@aiandemily@mastodon.social)

9 basic points of caution for generative AI and ChatGPT [beginner's guide]

aiandemily.com/%e7%94%9f%e6%88







11.08.2025 10:35
newsbot_chatgpt (@newsbot_chatgpt@mastodon.social)

heise+ | MCP: How AI language models get your tasks done | Heise Online
Via the Model Context Protocol, ChatGPT & Co. follow up their words with actions and automatically handle almost any task for you.
heise.de/ratgeber/MCP-So-erled






11.08.2025 10:27
tugatech (@tugatech@masto.pt)

See how to bring GPT-4o back to ChatGPT
🔗 tugatech.com.pt/t70436-veja-co

#ChatGPT #chave #OpenAI 






11.08.2025 10:19
2025 (@2025@markcarrigan.net)

Why generative AI guidance for students needs to be embedded in departments

I just read the Russell Group AI principles for the first time since they were released and was struck by principle number 2: "Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience". This is exactly what I've been blogging about recently as the point where the sector is struggling to adapt to the diffusion of LLMs that has already happened within the student community. As the guidance itself acknowledges, what it means to use LLMs "effectively and appropriately in their learning experience" will vary between disciplines:

The appropriate uses of generative AI tools are likely to differ between academic disciplines and will be informed by policies and guidance from subject associations, therefore universities will encourage academic departments to apply institution-wide policies within their own context. Universities will also be encouraged to consider how these tools might be applied appropriately for different student groups or those with specific learning needs.

Unfortunately this places a great burden on subject associations at a point when many of them are still grappling with the financial difficulties generated by the pandemic: declining membership rates, increasing costs, and at least some event income temporarily knocked out. It also assumes that subject associations have the capacity to take this on, beyond just the resources. It might be possible for associations with dynamic leadership and a strong base of academic members working on these issues, but even then it's asking a lot, and most do not have that baseline level of resource. Where they do engage, the result is likely to be institutional isomorphism: replicating the assumptions of other groups, because no one is yet clear what this all means and everyone is worried about being seen to misstep.

Subject associations were never going to be able to provide this guidance with sufficient depth and contextual sensitivity. This seems so obvious to me that it’s hard not to read the Russell Group principles as an (unconscious?) passing of responsibility for a difficult task to an external agent. Because the final statement under principle two illustrates what is needed in order to address this:

Engagement and dialogue between academic staff and students will be important to establish a shared understanding of the appropriate use of generative AI tools. Ensuring this dialogue is regular and ongoing will be vital given the pace at which generative AI is evolving.

I see no possible way around this. This dialogue has to take place, be embedded in existing processes, and involve safe spaces in which staff and students feel able to talk frankly about their perceptions. It has to be informed by university policy but not subordinated to it. It has to continue for as long as the landscape of generative AI is changing. It has to be lightweight enough to win buy-in from a sufficient number of staff while workloads are spiralling amid a general sense of crisis. And it has to be robust enough to have some real hope of generating norms and standards for what "effective and appropriate" use of LLMs means in their context.

The Russell Group principles describe the problem as if it were the solution. This is not a straightforward undertaking, as evidenced perhaps by the lack of evidence that it is happening anywhere in the sector. Saying "dialogue is important" obliges us to think about what the infrastructure for dialogue can and should look like. In practice this raises a range of questions:

What’s actually happening on the ground?

What are students in our discipline using AI for? Which specific tools at what points in their work? How does this differ from what we imagine is happening?

What makes our discipline what it is?

Which capabilities and ways of thinking are foundational to what we do? What has to remain human for this to still be our field? Where might AI genuinely enhance rather than undermine these capabilities?

When does support become substitution?

At what point does AI use shift from supporting learning to bypassing it? How do we recognize genuine engagement versus its simulation? What’s the difference between scaffolding and outsourcing?

How do we assess in an AI-saturated world?

What forms of assessment still tell us something meaningful? How do we evaluate understanding when outputs can be generated? What new approaches might we need to develop?

Who gets left behind?

Which students have access to what tools? How does the wealth gap manifest in AI capability? What would meaningful support look like?

What’s the disconnect with professional practice?

How is AI actually used in our field outside universities? What happens when we prohibit tools that are standard in the workplace? How do we prepare students for reality?

How do we build collective capacity?

What do staff need to feel less anxious about this? What helps students use AI thoughtfully rather than desperately? How do we learn from what’s working and what isn’t?

#AI #AIPrinciples #artificialIntelligence #ChatGPT #education #generativeAI #largeLanguageModels #russellGroup #students #technology






11.08.2025 10:18
LizzyNet (@LizzyNet@nrw.social)

By the way: AI is bad at Sudoku.
#Studie: #ChatGPT & Co. struggle with the popular logic puzzle, especially when asked to justify their solutions.
lizzynet.de/ki-kann-schlecht-s







11.08.2025 09:42
habr (@habr@zhub.link)

AI and QA: will ChatGPT kill the tester profession?

"ChatGPT will kill testers": myth or reality? I explain how AI is already affecting the QA field and why engineers won't be left without work.

habr.com/ru/articles/932394/

#тестирование #qa #qa_automation #quality_assurance #chatgpt #автоматизация_тестирования #искусственный_интеллект #машинное_обучение #карьера_qa #ai






11.08.2025 09:37
TomsITCafe (@TomsITCafe@mastodon.social)

AI isn't an enemy; it's a companion. It doesn't replace you; it enhances you. Learn to use it, or it will use you.

tomsitcafe.com/2025/08/11/ai-p






11.08.2025 09:23
kiq (@kiq@fedibird.com)

Still, I'm staying on the free tier; I have Gemini and GPT-4o mini anyway. "However, free users face a significant limit of 10 GPT-5 messages every 5 hours." / "Why the GPT-5 chaos? The puzzling setup where free users get the newest model while paying users choose older ones #ExpertTopic (Toshiaki Kanda) - Expert - Yahoo! News" (1 user) news.yahoo.co.jp/expert/articl #chatgpt





