litecli is a command-line client for SQLite databases with auto-completion and syntax highlighting.
The post litecli – command-line client for SQLite databases appeared first on LinuxLinks.
/// 17 Nov 2025, 6:08 pm ////// 9to5Linux ///

Git 2.52, the latest release of the open-source distributed version control system, is now available for download with numerous new features and improvements. Here's what's new!
The post Git 2.52 Introduces New Command for Grabbing Various Repository Characteristics appeared first on 9to5Linux.
/// 17 Nov 2025, 4:46 pm ////// Phoronix ///
/// 17 Nov 2025, 12:34 pm ////// Slashdot ///
/// 17 Nov 2025, 5:56 pm ////// GamingOnLinux ///
/// 17 Nov 2025, 6:06 pm ////// Google News ///
Running AI on public data is easy. The hard part is moving sensitive data and valuable models into production without risking leakage while operating at the scale modern GPU clusters demand. That was the blunt message from NVIDIA’s Zvonko Kaiser at the OpenInfra Summit Europe 2025, where he outlined how NVIDIA is using Kata Containers and the CNCF Confidential Containers stack to deliver “trusted AI anywhere”: on-prem, in private clouds, across public CSPs, and out to the edge.
“The real challenge is running AI pipelines on confidential data and protecting model IP,” Kaiser said, noting that for many enterprises, that trust gap is why “66% of enterprises leave >50% of private data unused.”
Below is a concise walkthrough of the problem space, the architecture NVIDIA is advancing with Kata Containers, and what it means for teams building secure AI on Kubernetes.
The trust problem (and why 2025 is different)
Kaiser framed the landscape as three pillars of security for AI:
- Cryptographic compute (e.g., homomorphic encryption, MPC, zero-knowledge proofs): powerful, but often orders of magnitude too slow for deep learning.
- Software sandboxes (e.g., gVisor, Firecracker): reduce the blast radius, but still assume trust in the host.
- Trusted Execution Environments (TEEs): hardware-backed isolation that inverts the model, so the workload doesn't trust the infrastructure.
The inflection point: modern CPU TEEs (AMD SEV-SNP, Intel TDX) now combine with GPU-level protections (Hopper and newer), and Kubernetes plumbing has matured. That alignment makes it practical to enforce confidentiality and integrity without rewriting your AI code.
“Scale is spelled GPU,” Kaiser reminded the audience. “Enterprises care that you can run pipelines across hundreds of nodes and thousands of GPUs.”
Why Kata Containers?
Containers are a great packaging and delivery mechanism, but they don’t provide a strong isolation boundary on their own. Kata Containers adds a lightweight VM boundary around each container, giving you:
- Stronger isolation: guest kernel and userspace are independent from the host; host changes are far less likely to break your workload stack.
- OCI and Kubernetes compatibility: Kata integrates cleanly with containerd/CRI-O and Kubernetes primitives (e.g., RuntimeClass), so you can keep your workflows.
- A glide path to Confidential Containers: the same mechanics that make Kata useful for multi-tenant isolation also power Confidential Containers (Kata + guest components + attestation), where the VM memory is encrypted and measured.
Kaiser emphasized this “no surprises” posture: NVIDIA’s enablement patterns for bare-metal GPUs are replicated within the Kata guest, so the software experience is consistent across bare-metal, Kata, and Confidential Containers.
Kubernetes-native, lift-and-shift security
NVIDIA’s stack builds on familiar Kubernetes constructs:
- RuntimeClass to select between bare metal, Kata, or Confidential Containers per pod.
- DRA (Dynamic Resource Allocation) for fine-grained, policy-driven device assignment.
- CDI (Container Device Interface) to surface GPUs into containers/Kata VMs with the right binaries, libraries, and device nodes.
- NVIDIA GPU Operator to automate the cluster-level pieces (driver lifecycle, GPU feature discovery, networking, storage hooks).
- Peer-pods to support hybrid cloud scenarios, bursting Confidential Containers to CSPs while keeping isolation boundaries intact.
- “Rustifying” the stack to reduce memory-safety issues across critical components.
The result is a Kubernetes-native path: annotate your pods, choose your RuntimeClass, and let the stack handle device plumbing, NUMA/topology awareness, and attestation. As Kaiser put it, it’s lift-and-shift for your AI pipelines, not just individual containers.
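As a sketch of that lift-and-shift flow, here is what opting a pod into Kata looks like (the RuntimeClass name "kata" and the container image below are assumptions for illustration; your cluster's names may differ):

```shell
# Write a pod spec that selects the Kata runtime via RuntimeClass.
# The class name "kata" and the image are illustrative assumptions.
cat > gpu-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference
spec:
  runtimeClassName: kata        # VM-isolated runtime instead of runc
  containers:
  - name: inference
    image: nvcr.io/nvidia/pytorch:24.01-py3
    resources:
      limits:
        nvidia.com/gpu: 1       # GPU surfaced into the Kata guest via CDI
EOF
# On a real cluster you would now run: kubectl apply -f gpu-pod.yaml
grep -q 'runtimeClassName: kata' gpu-pod.yaml && echo "pod spec ready"
```

Everything else in the spec stays as it would be for a runc pod, which is the point: the isolation boundary changes, the workflow does not.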
Getting GPUs right inside VMs
When you put GPUs behind a VM boundary, topology matters. P2P transfers, GPUDirect RDMA, and NUMA constraints all care about PCIe placement and capabilities (ACS/ATS, switch hierarchies, etc.). NVIDIA addressed this with two complementary approaches in Kata:
- Topology flattening when you don’t need strict host mirroring.
- Host topology replication when you do, so drivers see the “right” layout and enable the fast paths automatically.
CDI metadata helps map which NIC belongs to which GPU for P2P and RDMA. Kata also supports PF/VF pass-through and lets you choose per-pod PCIe topology (e.g., one workload uses MIG, another uses time-sliced VFs, another uses GPUDirect RDMA). These are pragmatic features born from real customers pushing real scale.
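For a flavor of that CDI metadata, here is a minimal hand-written spec (device names and paths are illustrative assumptions; in practice the NVIDIA Container Toolkit generates these specs for you):

```shell
# Minimal CDI spec sketch: one named GPU device and the device nodes it
# injects into a container or Kata VM. Paths here are assumptions.
cat > nvidia-gpu-cdi.json <<'EOF'
{
  "cdiVersion": "0.6.0",
  "kind": "nvidia.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [
          { "path": "/dev/nvidia0" },
          { "path": "/dev/nvidiactl" }
        ]
      }
    }
  ]
}
EOF
# Sanity-check that the spec is well-formed JSON.
python3 -m json.tool nvidia-gpu-cdi.json > /dev/null && echo "valid CDI spec"
```

Real specs also carry mounts, hooks, and environment edits, which is how the right driver libraries and NIC/GPU pairings reach the workload.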
From Hopper to Blackwell and toward TDISP
On the hardware side, NVIDIA started with Hopper (single-GPU pass-through for confidential compute) and is extending with Blackwell, which adds multi-GPU pass-through and scales out across multi-node jobs. Performance improves further with TDISP (TEE Device Interface Security Protocol), which encrypts and integrity-protects traffic on the PCIe link itself, reducing overhead compared to bounce buffers. The message: hardware is ready; now the race is software, standards, and ops.
Attestation, secrets, and the “data clean room”
Kaiser underscored that attestation isn’t just for CPUs anymore. GPU state must be part of the measured trust chain, and NVIDIA is working with the community on composite attestation across CPU, GPU, NIC/DPU, and storage. Once a workload proves it’s in the expected state, key release can unlock encrypted model weights, datasets, and storage volumes.
That unlocks new multi-party trust models. Imagine a data clean room where a data owner, model owner, and infrastructure provider each receive verifiable assurances, and where the client can confidently execute sensitive AI workloads, knowing that every layer, from silicon to container and service, is attested and verified before any data or keys are exposed.
What this means for you
If you’re running AI on Kubernetes and you care about protecting model IP, complying with data regulations, or just not trusting the infrastructure by default, here’s why Kata + Confidential Containers should be on your shortlist:
- Familiar UX: keep your container images and Kubernetes workflows; select a RuntimeClass and go.
- Operational consistency: NVIDIA’s GPU Operator and CDI make bare metal, Kata, and CC feel the same to your pipelines.
- Scale with safety: VM isolation for noisy/malicious neighbors; confidential VMs for encrypted and attested execution.
- Performance-aware: topology replication and per-pod PCIe decisions preserve the fast paths GPUs need.
“It’s the same image, the same attestation, different postcodes,” Kaiser said. “Run it anywhere.”
How to get started
- Test your current workloads with a Kata RuntimeClass on a small node pool. Validate that the GPU paths and drivers behave as expected.
- Turn on attestation with Confidential Containers for sensitive pipelines. Wire it to a key broker so secrets are only released to measured states.
- Adopt DRA and CDI to control device assignment and expose the right GPU/NIC topology per job.
- Engage upstream: NVIDIA, Kata, and the Confidential Containers community are actively collaborating on topology, attestation, and reference architectures. Bring your use cases and performance traces.
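The first step above can be sketched as follows, assuming the Kata runtime is already installed on your nodes and registered with containerd under the handler name "kata" (an assumption; match it to your containerd config):

```shell
# Define a RuntimeClass pointing at the Kata handler; pods then opt in
# individually via spec.runtimeClassName.
cat > kata-runtimeclass.yaml <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF
# Apply with: kubectl apply -f kata-runtimeclass.yaml
# Quick validation idea: a Kata pod boots its own guest kernel, so `uname -r`
# inside such a pod should differ from the host's kernel version.
grep -q 'handler: kata' kata-runtimeclass.yaml && echo "runtimeclass manifest written"
```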
Kaiser’s call to action was simple: test, contribute, deploy. If your teams have been holding back high-value data because the trust wasn’t there, this is your opportunity to close that gap without rewriting your AI stack.
“With confidential compute, the workload doesn’t trust the infrastructure,” he said. “Kata and Confidential Containers make that model practical at GPU scale.”
—
Interested in sharing your results or learning more about NVIDIA’s reference architectures with Kata? Join the Kata Containers Slack and the CNCF Confidential Containers discussions. Your feedback directly shapes what ships next.
The post NVIDIA + Kata Containers: Trusted AI at GPU Scale appeared first on Superuser.
/// 17 Nov 2025, 6:10 pm ////// Tux Machines ///
/// 12 Nov 2025, 12:00 am ////// RedHat ///
/// 17 Nov 2025, 12:00 am ////// Blog on AlmaLinux ///
AlmaLinux OS 9.7 Stable Now Available
Hello Community! The AlmaLinux OS Foundation is announcing the general availability of AlmaLinux OS 9.7 codenamed “Moss Jungle Cat”!
Installation ISOs are now available on the mirrors for all four architectures.
Torrents are available as well.
ISOs, Live Images, Cloud and Containers
AlmaLinux also offers a variety of Cloud, Container and Live Images. The builds for these get kicked off as soon as the public repository is ready.
/// 17 Nov 2025, 4:53 pm ////// The Hacker News ///
/// 17 Nov 2025, 8:14 am ////// Reddit ///

I rely heavily on GNOME extensions for my daily workflow, from Dash to Dock for quick app launching to Tiling Shell for effortlessly managing app windows while working. These basically turn the vanilla GNOME experience into something that truly fits my needs.
While browsing through the latest This Week in GNOME post, I stumbled upon something interesting. A developer announced Veil, describing it as a cleaner, more modern alternative to Hide Items for managing applets in the GNOME panel.
It sounded promising. So I decided to take it for a spin and see what it brings to the table.
Veil: Overview ⭐

Veil comes from Dagim G. Astatkie, a software professional based in Ethiopia. This extension addresses a common frustration among GNOME users: if you are a power user, your top panel can quickly fill up with system indicators and status icons.
It gets messy fast, and Veil gives you control over what stays visible and what gets hidden away.
It offers many handy features, like auto-hiding items on a timer, slick animations when showing or hiding items, and the ability to choose which panel icons stay visible.
Initial Impressions 👨‍💻
I installed it using Extension Manager on an Ubuntu 25.10 system, and the process was straightforward from start to finish. First, I enabled a few other extensions to properly test how Veil handles multiple panel items. Once that was done, everything clicked into place.
A single click on the sharp-looking arrow at the top right of the panel did the trick. My network stats indicator disappeared. The Tiling Shell layout switcher vanished. System Monitor went away too. A clean top panel, just like that.


Veil's General and Panel Items page.
If I wanted to tweak things further, I could easily do so by heading into the "General" tab of the extension settings. There I got to play around with options like save state, default visibility, changing the arrow icon to something else for open and close actions, configuring auto-hide timing, and deciding which items stay visible at all times.
This level of freedom should be enough for most people who want a clean top panel and some peace of mind.
📥 Get Veil
If you already have GNOME extensions set up on your system, installation is straightforward. Visit the extensions website or open Extension Manager and search for "Veil" by author "JD".
If you haven't configured extensions yet, our complete guide on GNOME shell extensions will walk you through the entire setup process. The source code for Veil lives on GitHub for those interested in contributing or building from source.
Suggested Read 📖

We did it again: Fedora at Kirinyaga University in Kenya. This time, we didn’t just introduce what open source is – we showed students how to participate and actually contribute in real time.
Many students had heard of open source before, but were not sure how to get started or where they would fit. We kept it hands-on and began with a simple explanation of what open source is: people around the world working together to create tools, share knowledge, and support each other. Fedora is one of these communities. It is open, friendly, and built by different people with different skills.
We talked about the many ways someone can contribute, even without deep technical experience. Documentation, writing guides, design work, translation, testing software, and helping new contributors are all important roles in Fedora. Students learned that open source is not only for “experts.” It is also for learners. It is a place to grow.
Hands-on Documentation Workshop

After the introduction, we moved into a hands-on workshop. We opened Fedora Docs and explored how documentation is structured. Students learned how to find issues, read contribution instructions, and make changes step-by-step. We walked together through:
- Opening or choosing an issue to work on
- Editing documentation files
- Making a pull request (PR)
- Writing a clear contribution message
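Those steps map onto a standard git workflow. The sketch below simulates it in a throwaway local repository (repo and file names are illustrative; a real contribution starts from a fork of a Fedora Docs repo and ends with opening a pull request in the web UI):

```shell
# Simulate the contribution loop in a local repo so each step runs as-is.
git init -q docs-demo
git -C docs-demo config user.email "student@example.com"
git -C docs-demo config user.name "Student"
printf '= Instaling Fedora\n' > docs-demo/install-guide.adoc   # page with a typo
git -C docs-demo add install-guide.adoc
git -C docs-demo commit -qm "initial page"
git -C docs-demo checkout -qb fix-install-typo                 # 1. branch per change
sed -i 's/Instaling/Installing/' docs-demo/install-guide.adoc  # 2. edit the docs file
git -C docs-demo commit -qam "docs: fix typo in page title"    # 3. clear message
git -C docs-demo log --oneline -1      # 4. push this branch and open the PR from it
```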
By the end of the workshop, students had created actual contributions that went to the Fedora project. This moment was important. It showed them that contributing is not something you wait to do “someday.” You can do it today.
“This weekend’s Open Source Event with Fedora, hosted by the Computer Society of Kirinyaga, was truly inspiring! Through the guidance of Cornelius Emase, I was able to make my first pull request to the Fedora Project Docs – my first ever contribution to the open-source world.”
– Student at Kirinyaga University
Thank you note
Huge appreciation to:
- Jona Azizaj — for steady guidance and mentorship.
- Mat H. — for backing the vision of regional community building.
- Fedora Mindshare Team — for supporting community growth here in Kenya.
- Computer Society of Kirinyaga — for hosting and bringing real energy into the room.
And to everyone who played a part – even if your name isn’t listed here, I see you. You made this possible.
Growing the next generation
The students showed interest, curiosity, and energy. Many asked how they can continue contributing and how to connect with the wider Fedora community. I guided them to Fedora Docs, Matrix community chat rooms, and how they can be part of the Fedora local meetups here in Kenya.
We are introducing open source step-by-step in Kenya. There is a new generation of students who want to be part of global technology work. They want to learn, collaborate, and build. Our role is to open the door and walk together (I have a Discourse post on this; you’re welcome to add your views).

What Comes Next
This event is part of a growing movement to strengthen Fedora’s presence in Kenya. More events will follow so that learning and contributing can continue.
We believe that open source becomes strong when more people are included. Fedora is a place where students in Kenya can learn, grow, share, and contribute to something global.
We already had a Discourse thread running for this event – from the first announcement, planning, and budget proposal, all the way to the final workshop. Everything happened in the open. Students who attended have already shared reflections there, and anyone who wants to keep contributing or stay connected can join the conversation.
You can check the event photos submitted here on Google Photos (sorry, that’s not FOSS :)).
Cornelius Emase,
Your Friend in Open Source (Open Source Freedom Fighter)
Review: Zorin OS 18
News: NetBSD experiments with sandboxing, postmarketOS unifies its documentation, OpenBSD makes system upgrades more resilient, Canonical offers 15 years of support for Ubuntu, Debian publishes updated media for Trixie
Questions and answers: Deleting a file with a weird name
Released last week:....
/// 12 Nov 2025, 7:31 am ////// Tecmint ///
Gone are the days when Skype was the go-to VoIP tool for every chat, call, or meeting. While Skype once
The post How to Install Microsoft Teams, Slack, and Discord on Linux Desktop first appeared on Tecmint: Linux Howtos, Tutorials & Guides.