YUSEF@MOSIAH.ORG

12th May 2026 at 9:28am


Agentic DevSecOps Is Basic Literacy

The next computer-literacy class should teach students how to run, inspect, and govern cognitive machinery.

The 21st-century computer-literacy curriculum is not “everyone should learn to code.”

That was too narrow. Coding matters, but the world has moved. AI changes the center of gravity. The basic skill is no longer only writing programs by hand. The basic skill is understanding and operating agentic infrastructure safely.

Every serious institution will need people who can do this. Schools, libraries, universities, nonprofits, clinics, local governments, small businesses, research groups, media organizations, and community institutions will all face the same question: how do we run AI systems that can read, write, search, summarize, cite, act, and remember without losing control of our data, our security, or our judgment?

That is not a narrow engineering question. It is civic infrastructure.

The curriculum should start with Linux, shells, SSH, filesystems, permissions, processes, and logs. Not because everyone needs to become a sysadmin, but because people should understand the substrate of computation. They should know what it means for a system to run somewhere. They should know what a process is, what a file is, what a port is, what a permission is, what a log is, and what it means to restart a service.
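That vocabulary maps onto a handful of everyday commands. A minimal sketch, assuming a typical Linux machine (the service name `sshd` is an example; the systemd lines are shown as comments because they need a running systemd and root access):

```shell
# What it means for a system to "run somewhere", in a few commands.
ps aux | head -5                 # processes: what is running, and as which user
ls -l /etc/passwd                # a file, its owner, and its permission bits
ss -tln 2>/dev/null | head -5 || true   # ports: which sockets are listening
# On a systemd machine, logs and service control look like:
#   journalctl -u sshd -n 20     # the last 20 log lines for the sshd service
#   sudo systemctl restart sshd  # what "restart a service" actually does
```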

Then Git: diffs, commits, branches, merges, reviews, rollback. Git is not just a developer tool. It is one of the most important public models for trustworthy change. It teaches that artifacts can have history, that changes can be inspected, that work can branch, that mistakes can be reverted, and that collaboration needs visible state.
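Each of those ideas is a one-line command. A minimal sketch in a throwaway repository (the file name and commit messages are invented):

```shell
# History, inspection, and rollback in a throwaway repository.
cd "$(mktemp -d)" && git init -q .
git config user.email demo@example.com && git config user.name demo
echo "first draft" > notes.txt
git add notes.txt && git commit -qm "add notes"
echo "second draft" > notes.txt
git diff                         # every change is inspectable before it is recorded
git commit -qam "revise notes"
git log --oneline                # artifacts have history
git revert --no-edit HEAD        # a mistake is undone without erasing the record
cat notes.txt                    # prints "first draft"
```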

Then containers and microVMs. Students should understand isolation. If an agent can run commands, where does it run them? What can it access? What happens if it makes a mistake? What does it mean to give a system a sandbox? Why is approving every command not a sufficient security model? Why is it often better to let an agent run freely inside a bounded environment than to supervise it anxiously on your own machine?
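Real agent sandboxes use containers or microVMs; bash's restricted mode is only a toy, but it makes the core point concrete: the boundary is enforced by the environment, not by approving each command. A minimal sketch:

```shell
# A toy bounded environment. bash -r (restricted mode) forbids cd,
# changing PATH, and redirecting output to arbitrary paths.
SANDBOX="$(mktemp -d)"
cd "$SANDBOX"
bash -r -c 'cd /etc' 2>&1                # the attempt to leave is refused...
echo "escape attempt exit status: $?"    # ...with a nonzero status
bash -r -c 'echo "work inside the boundary is fine"'
```

A container or microVM applies the same idea at the level of filesystems, networks, and kernels, which is why an agent can be allowed to run freely inside one.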

Then networking: DNS, TLS, reverse proxies, firewalls, APIs, webhooks. The internet is not magic. Agentic systems are increasingly made of services talking to other services. Students should understand the risks and the shape of that conversation.
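The first layer, name resolution, can be seen with only local facilities; the later layers need outbound network access and are shown as comments (the URLs are illustrative):

```shell
# A name becomes an address before anything else happens.
getent hosts localhost           # the resolver maps a name to 127.0.0.1 or ::1
# With network access, the next layers look like:
#   curl -sI https://example.com | head -3    # TLS handshake, then HTTP headers
#   curl -s https://api.example.com/v1/ping   # services talking to services
```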

Then secrets: API keys, tokens, passwords, scopes, least privilege, rotation, vaults, audit logs. An AI agent with the wrong credential is not a helper. It is a liability.
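Least privilege starts on disk. A minimal sketch with an invented key file (the key value is fake):

```shell
# A secret lives in a file only its owner can read, loaded at runtime,
# never written into source code or committed to a repository.
DIR="$(mktemp -d)"
umask 077                              # new files default to owner-only
printf 'sk-demo-not-a-real-key\n' > "$DIR/api_key"
chmod 600 "$DIR/api_key"               # read/write for the owner, nothing else
ls -l "$DIR/api_key"                   # -rw------- ...
API_KEY="$(cat "$DIR/api_key")"        # the process gets the value; the repo never does
```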

Then models: closed APIs, open weights, local inference, routing, context windows, embeddings, retrieval, speech-to-text, text-to-speech, image models, audio models, and model limitations. Students should know that “AI” is not one thing. It is a stack of components with different costs, risks, licenses, latencies, and failure modes.

Then verifiers: tests, evals, traces, provenance checks, citation checks, human review, red-team prompts, rollback policies, and anti-Goodhart constraints. The question is not “did the model sound confident?” The question is “what evidence shows the system did the right thing?”
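Verifiers can be very small. A toy citation check, with an invented output format (the `[source: ...]` marker is an assumption for this sketch, not a standard):

```shell
# Does every line of the agent's output carry a citation marker?
OUT="$(mktemp)"
cat > "$OUT" <<'EOF'
The meeting moved to Friday. [source: minutes-2026-05-01.txt]
Attendance was 14. [source: minutes-2026-05-01.txt]
EOF
if grep -vq '\[source: ' "$OUT"; then
  echo "FAIL: at least one uncited claim"
else
  echo "PASS: every claim cites a source"
fi
```

The check is crude, but it answers the right kind of question: it looks for evidence in the artifact rather than trusting the model's tone.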

Then artifacts: vtexts, source bundles, transcripts, claim graphs, datasets, code, notes, summaries, generated media, and public records. Students should learn that durable work is not a chat thread. Durable work has structure, history, provenance, and ownership.

This is agentic DevSecOps. The phrase sounds specialized. The underlying literacy is general.

The old computer-literacy class taught students to use office software. The next one should teach students how to run, inspect, and govern cognitive machinery.

This changes the role of libraries and schools. A library should not merely provide access to databases and computers. It should become a local knowledge infrastructure node: running models, preserving archives, hosting vtexts, teaching provenance, helping patrons use AI without surrendering their data, and maintaining public memory.

A school should not merely ban or permit AI. It should teach students how AI systems work, how they fail, how to verify them, how to use them without becoming dependent on them, and how to contribute to shared knowledge responsibly.

This is basic literacy because the alternative is dependency. If only labs, hyperscalers, vendors, and consultants understand the infrastructure, then everyone else rents cognition from institutions they cannot inspect. That is not education. That is managed dependence.

The 21st century needs a different standard.

Students should be able to ask where the model is running, what data it can see, what tools it can call, what artifacts it changes, what logs it leaves, what sources it used, how it can be corrected, and how to recover if it fails.

That is not advanced.

That is the new baseline.