Hello everyone on #mastodon
"No rain today"
_ Stop it, you're kidding.
_ I swear, the forecast says no rain.
_ Well, damn.
Not bad.
At one minute past noon, the Year of the Horse begins.
That calls for a drink.
We're heading down into the plain this morning; there's not much left to eat, and it's time to take action. Partying is all well and good, but you need something to live on.
Happy Tuesday, comrades!
🤗 ✊ 🥰 🤗 ✊ 🥰 🤗 ✊ 🥰 🤗 ✊ 🥰
Mastodon→PieFed→Fedibird is no good, it turns out.
> Detecting social bots in decentralized online social networks such as #Mastodon is challenging due to the fragmented governance... we propose FediScan, a decentralized federated learning framework for social bot detection in the #Fediverse.
#academia #academicchatter #misinformation #bots
Live GPS Navigation Services
#apkpure #GPS #mastodon #travel #startup
*If anyone can help me with this, I would sincerely appreciate it. I have no interest in discussing whether Windows 7 is supported or not. I know it's not, and I am choosing to use it. I have everything I need, except a Mastodon client, so right now, I am using 11. And no, I have no interest in Linux either.*
I normally use TweeseCake, but it doesn't work on Windows 7, and it's closed source, so there is no way of changing that. Apparently, there is a version of TWBlue that did work on 7, but I can't find it. I just tried a client called Fast SM, and it's wonderful. However, it doesn't work on 7 either. Judging by the GitHub page, it seems to be written in Python. I am not a programmer, but would it be possible to get modern TWBlue or Fast SM to work with Windows 7? Both of these are open source. If not, then does anyone have the old version of TWBlue?
Finally, it seems that, unlike TweeseCake, the others have some sort of retrieval limit. If, for example, I'm not on Mastodon for a day or two and have 400 notifications, TweeseCake shows all of them. But the others only show a certain number far below that. Why is this, and can it be changed?
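The per-request cap described above matches how the Mastodon API works: each call to the notifications endpoint returns at most one page (the `limit` parameter is capped server-side), and a client that wants everything has to keep requesting older pages via `max_id`. Here is a minimal sketch of that pagination pattern; `fetch_page` is a stand-in for the real HTTP call to `/api/v1/notifications`, and the 400-notification dataset is invented for the example.

```python
# Simulated notification store, newest first, as the API returns them.
ALL_NOTIFICATIONS = [{"id": i} for i in range(400, 0, -1)]

def fetch_page(max_id=None, limit=30):
    """Stand-in for GET /api/v1/notifications?max_id=...&limit=...
    Returns at most `limit` items older than `max_id`."""
    items = [n for n in ALL_NOTIFICATIONS if max_id is None or n["id"] < max_id]
    return items[:limit]

def fetch_all():
    """Loop with max_id until a page comes back empty."""
    results, max_id = [], None
    while True:
        page = fetch_page(max_id=max_id)
        if not page:
            break
        results.extend(page)
        max_id = page[-1]["id"]  # continue from the oldest item seen so far
    return results

print(len(fetch_all()))  # 400 — the whole backlog, not just one page
```

So a client that shows only "a certain number" is likely issuing a single request; one that shows all 400 is looping like `fetch_all` does. Whether that can be changed depends on the client exposing such an option.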
As a side note, I tried Whalebird and it was an absolute disaster with NVDA.
#accessibility #blind #clients #FastSM #Fediverse #Mastodon #NVDA #programming #Python #technology #TWBlue #TweeseCake #Windows7
HELP PLEASE. I'M SO LOST & HUNGRY & DEFEATED 😭🫂
Pls share my pinned post. 🙏🥺
#MutualAidRequest #help #boost #fediverse #mastodon


Those little Exocomps are simply spirited! Dr. Farallon wanted to treat them like common tools, but anyone with a sense of empathy could feel their hesitation. I'm so proud of Data for standing up for the little darlings. It doesn't matter what you're made of - life is life! And frankly, the Exocomps had more personality than some of the people I've dated. #LwaxanaTroi #QualityOfLife #AllStarTrek #StarTrek #StarTrekTNG #TNG #StarTrekDS9 #DS9 #Mastodon
@davidgerard said:
> Most of the AI coding claims are conveniently nondisprovable. What studies there are show it not helping coding at all, or making it worse
> But SO MANY LOUD ANECDOTES! Trust me, my friend, I am the most efficient coder in the land now. No, you can't see it. No, I didn't measure. But if you don't believe me, you are clearly a fool.
It's wild to me that you don't realize the projection here. You are presenting a story of your own (asserting that the claims are nondisprovable) and accusing computer scientists of relying on anecdotal evidence, without providing any evidence for that yourself, while expecting people to take it prima facie. You are doing exactly what you accuse others of doing.
Computer scientist here. I will never understand why people lie like this. Do you believe this makes people want to listen to you? Do you believe it helps your case? First, the term is falsifiable, and proving propositions about algorithms (i.e., code) is part of what I do for a living. Mathematically, human-written code and AI-written code can both be tested, which means propositions about them can be falsified. You would test them in exactly the same way.
There is no intrinsic mathematical distinction between code written by a person and code produced by an AI system. In both cases, the result is a formal program made of logic and structure, and in principle the same testing techniques apply to each. If claims about AI-written code were really nondisprovable, you could not compare what humans generate with what AI generates. But you can. Studies have found that AI-generated code tends to exhibit a higher frequency of certain types of defects, so reviewers and testers know which logic flaws and security weaknesses to look for. That would not be possible if these claims were nondisprovable.
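A toy illustration of the point above: a test doesn't know or care who wrote the code under it, so a claim like "this function sorts correctly" is equally falsifiable for human and AI output. Both implementations below are invented for the example (the "generated" one is deliberately buggy).

```python
def sort_human(xs):
    """A human-written implementation."""
    return sorted(xs)

def sort_generated(xs):
    """A (deliberately buggy) generated one: silently drops duplicates."""
    return sorted(set(xs))

def check_sort(impl):
    """The same falsifiable proposition, applied regardless of author."""
    case = [3, 1, 2, 1]
    return impl(case) == [1, 1, 2, 3]

print(check_sort(sort_human))      # True — the claim survives the test
print(check_sort(sort_generated))  # False — the claim is disproved
```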
You can study this from datasets where the source of the code is known. You can use open-source pull requests identified as AI-assisted versus those written without such tools. You then evaluate both groups using the same industry-standard analysis tools: static analyzers, complexity metrics, security scanners, and defect classification systems. These tools flag bugs, vulnerabilities, performance issues, and maintainability concerns. They do so in a consistent way across samples.
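The comparison logic in that methodology is simple to sketch: run one analyzer over two labelled groups of samples and compare per-group flag rates. The "analyzer" below just counts bare `except:` clauses with the standard `ast` module; real studies use full static analyzers and security scanners, and the two one-sample groups here are invented for the example, but the shape of the evaluation is the same.

```python
import ast

def count_bare_excepts(source):
    """Flag bare `except:` handlers, a common code-smell proxy."""
    tree = ast.parse(source)
    return sum(
        isinstance(node, ast.ExceptHandler) and node.type is None
        for node in ast.walk(tree)
    )

# Two labelled groups of code samples (hypothetical).
samples = {
    "human": ["try:\n    f()\nexcept ValueError:\n    pass"],
    "ai":    ["try:\n    f()\nexcept:\n    pass"],
}

# Same analyzer, same metric, applied consistently across groups.
rates = {
    group: sum(map(count_bare_excepts, codes)) / len(codes)
    for group, codes in samples.items()
}
print(rates)  # {'human': 0.0, 'ai': 1.0}
```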
A widely cited analysis of 470 real pull requests reported that AI-generated contributions contained roughly 1.7 times as many issues on average as human-written ones. The difference included a higher number of critical and major defects, as well as more logic and security-related problems. Because these findings rely on standard measurement tools (counting defects, grading severity, and comparing issue rates), the results are grounded in observable data. Again, this is the point: it's testable, and therefore disprovable.
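For concreteness, a headline figure like "1.7x as many issues" is just a ratio of per-group issue rates. The counts below are hypothetical, chosen only to show the arithmetic; they are not the study's actual data.

```python
# Hypothetical defect counts for two equally sized groups of PRs.
groups = {
    "human": {"prs": 235, "issues": 470},  # 2.0 issues per PR
    "ai":    {"prs": 235, "issues": 799},  # 3.4 issues per PR
}

# Issue rate per group, then the ratio between groups.
rates = {g: d["issues"] / d["prs"] for g, d in groups.items()}
ratio = rates["ai"] / rates["human"]
print(round(ratio, 1))  # 1.7
```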
You can actually check out this paper for more information:
Human-Written vs. AI-Generated Code: A Large-Scale Study of Defects, Vulnerabilities, and Complexity
As AI code assistants become increasingly integrated into software development workflows, understanding how their code compares to human-written programs is critical for ensuring reliability, maintainability, and security. In this paper, we present a large-scale comparison of code authored by human developers and three state-of-the-art LLMs, i.e., ChatGPT, DeepSeek-Coder, and Qwen-Coder, on multiple dimensions of software quality: code defects, security vulnerabilities, and structural complexity. Our evaluation spans over 500k code samples in two widely used languages, Python and Java, classifying defects via Orthogonal Defect Classification and security vulnerabilities using the Common Weakness Enumeration. We find that AI-generated code is generally simpler and more repetitive, yet more prone to unused constructs and hardcoded debugging, while human-written code exhibits greater structural complexity and a higher concentration of maintainability issues. Notably, AI-generated code also contains more high-risk security vulnerabilities. These findings highlight the distinct defect profiles of AI- and human-authored code and underscore the need for specialized quality assurance practices in AI-assisted programming.
https://arxiv.org/abs/2508.21634
*gasp* An actual source! Something I've noticed about most of the content I read here is that none of you ever include citations, sources, or receipts. It's always some out-of-context screenshot with no reference link or actual sources. You all are just as bad as Twitter at this point. To be clear, I don't care about AI one way or the other. I am reacting to your epistemological claim, not to the question of whether we should or should not use AI. I don't care, and I promise you that I will ignore red herrings and straw men. An argument about why I should care would be a red herring.