Visualize your Python data structure with just one click.
Linked List: https://memory-graph.com/#codeurl=https://raw.githubusercontent.com/bterwijn/memory_graph/refs/heads/main/src/linked_list.py&step=0.2&play
#Python #memory_graph #DataStructure

Building a LINE bot with AWS Bedrock using Terraform
https://qiita.com/ikuoikuo/items/9812654517306142ddb3?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
#qiita #Python #AWS #Terraform #bedrock
🍳 Automate your data analysis! Building a workflow-efficiency prototype with Copilot
https://qiita.com/tmng3y3/items/6fa3a0447f0a44992bb2?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
#qiita #Python #業務効率化 #copilot #生成AI #Excel自動化
@zeitverschreib I'm not a blogger, but a #python developer...
I'd say use #python, as it is **MUCH** simpler for basic tasks like text formatting / YAML extraction...
Change? OMG!
For a while now, I’ve been thinking about moving my blog from Hugo & PaperMod to BSSG.
Both are Static Site Generators, both use Markdown, both are Open Source. But as far as I can tell, the frontmatter of the documents differs between the two systems. This would mean that I’d have to at least check each and every md file by hand before moving it from Hugo to BSSG.
The main content of my blog, i.e. the posts and fixed pages I wrote by hand, would not be an issue: currently about 50 files to review and adjust. But I have also moved over all my posts from Instagram and Pixelfed, using a quick'n'dirty Visual Basic script to convert exported HTML into separate files. Hundreds of posts, each containing one or more images, a few hashtags, and maybe a short comment.
I could rewrite that script to accommodate BSSG, but what about the next switch to a new platform a few years down the road?
How do you, fellow bloggers out there, handle this problem?
Should I create all my posts in some kind of basic format and write a translator script that produces the final md file with the correct frontmatter? Should I learn Python or Rust and convert the current Hugo-style files to BSSG input? And speaking of programming languages: which one should I learn, Python or Rust?
So many questions. :-)
#bssg #blogging #hugo #python #rust
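For the translator-script idea above, here's a minimal stdlib-only sketch in Python. It assumes Hugo-style posts with `---`-delimited YAML frontmatter; the BSSG field names (e.g. renaming `date` to `published`) are pure assumptions for illustration, since I don't know BSSG's actual frontmatter format.

```python
import re

# Hugo-style frontmatter: a YAML block between two "---" lines at the top.
FRONTMATTER = re.compile(r"\A---\n(.*?)\n---\n", re.DOTALL)

def split_post(text):
    """Split a markdown post into (frontmatter, body)."""
    m = FRONTMATTER.match(text)
    if not m:
        return "", text
    return m.group(1), text[m.end():]

def convert(src_text):
    """Rewrite Hugo frontmatter into a (hypothetical) BSSG layout."""
    fm, body = split_post(src_text)
    lines = []
    for line in fm.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "date":        # assumed BSSG field name, adjust as needed
            key = "published"
        lines.append(f"{key}:{value}")
    return "---\n" + "\n".join(lines) + "\n---\n" + body
```

The nice part of this approach: once `split_post` works, the next platform switch only needs a new `convert` function.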
Want to learn about Python packaging, how to play Ukulele, or how to get from coding to leadership? #PyCampCZ is happening in just 3 weeks. It's a barcamp-style unconference about all things #Python in the middle of Czechia.
👉 Check it out: https://pycamp.cz/
How to profile code with #Python using the built-in cProfile module in the REPL

If anyone would like to learn more about programming in Python, a recording of the training "Arkana Pythona: sztuczki, kruczki, sekrety" ("Python Arcana: tricks, quirks, secrets"), recently run by @gynvael, is available on YouTube
https://www.youtube.com/watch?v=y9jIN8c5_ZA
After 15 years and hundreds of #DDI projects, I built a repo of small helpers that saved me time: simple scripts and utilities. I’m no coder, just a script tinkerer who prefers pragmatic fixes over frameworks. It’s a living toolbox. Some rough, most battle-tested, many improvable. Use, fork, improve, and tell me what to fix.
https://github.com/ataudte/ddi-helpers
#DNS #DHCP #IPAM #github #Python #Shell #PowerShell

⭐️🛠️ Workshop spotlight! 🛠️⭐️
💻 TDD: what it is, why it's good, and why it might just solve all your AI problems by @hjwp
🛠️⭐️ Find out more here: https://pretalx.com/pyconuk-2025/talk/GW3Z8Y/
🎟️ Grab your ticket! https://2025.pyconuk.org/tickets/

I want to export a SciTools Iris cube to Parquet, so I first convert the cube to a pandas DataFrame. However, there's that cftime field that can't be serialized to Parquet, and converting the column to datetime row by row is slow. Is there any way to get the Parquet file fast? There are only a few distinct timestamps, but because of the gridded nature of the cube they explode into many rows in the DataFrame.
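One possible approach, exploiting the fact that there are only a few distinct timestamps: convert each unique cftime value once and map the result back over the repeated rows, instead of parsing every row. This is a hedged sketch; the column name "time" is an assumption, and it relies only on cftime datetimes exposing `isoformat()`.

```python
import pandas as pd

def cftime_to_datetime(df, col="time"):
    """Convert a cftime column to pandas Timestamps, fast for gridded data.

    Row-by-row parsing is slow, but a gridded cube has only a few
    distinct timestamps, so convert each unique value once and map
    the result back over the many repeated rows.
    """
    mapping = {t: pd.Timestamp(t.isoformat()) for t in df[col].unique()}
    df[col] = df[col].map(mapping)
    return df
```

After this, `df.to_parquet(...)` should serialize the column as a normal datetime. Caveat: this only works if the cftime calendar round-trips through ISO strings (e.g. a standard/gregorian calendar, not a 360-day one).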