A UC San Diego Tool Teaching Code to 25 Million is Even More Critical in Age of AI
Still Worth Coding?
Programming Literacy in the Age of Generative AI
From FORTRAN to vibe coding, the nature of programming skill is being redefined — but the imperative to understand code may be stronger than ever
Bottom Line Up Front (BLUF)
The question arrives in every engineering department, every corporate training seminar, and every STEM graduate program: if a generative AI tool can produce working code from a natural-language prompt in seconds, does investing years in programming mastery still make sense? For an engineer whose primary computational environment has been MATLAB for two decades — having come up through FORTRAN, BASIC, Pascal, and C — the question is not abstract. It is personal and professional.
The short answer, supported by a growing and occasionally counterintuitive body of research, is that coding literacy matters more in 2026 than it did a decade ago, but the skills that matter most have shifted. The long answer, explored in this article, requires confronting what the research actually says about AI coding productivity, workforce displacement, software security, and the emerging redefinition of "programmer" for the modern era.
The Landscape Has Shifted — But Not in the Direction Headlines Suggest
The popular narrative positions generative AI as an unambiguous productivity accelerator. Large technology firms have made dramatic claims: Microsoft has reported that AI writes as much as 30 percent of its codebase, and Google's leadership has cited similar figures [9]. GitHub Copilot, Cursor, Windsurf, and a constellation of competing tools have achieved market saturation: Stack Overflow's 2025 Developer Survey found that 65 percent of professional developers now use AI coding assistants at least weekly [3], and JetBrains' 2025 State of the Developer Ecosystem survey of nearly 25,000 developers put the figure at 85 percent [11].
Yet a landmark randomized controlled trial conducted by the nonprofit research organization Model Evaluation and Threat Research (METR) delivered a sobering corrective in July 2025. The study recruited 16 experienced open-source developers — contributors to large, mature codebases averaging 22,000 GitHub stars and over one million lines of code — and randomly assigned each of 246 real development tasks to either allow or disallow use of AI tools. Developers primarily used Cursor Pro with Claude 3.5 and 3.7 Sonnet, the frontier models at the time of the study [12,13].
The finding was striking: developers using AI tools took 19 percent longer to complete tasks than those working without AI assistance. Perhaps more revealing, those same developers believed they had been 20 percent faster. The 39-percentage-point gap between perceived and measured productivity points to a dangerous form of overconfidence that researchers attribute to cognitive load, context-switching overhead, and the time required to validate and correct AI output [13,17].
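The size of that perception gap is easiest to see as a worked calculation. This is illustrative arithmetic only: the percentages come from the study, but the 100-minute baseline task is a made-up round number.

```python
# Worked arithmetic behind the METR perception gap.
# Assumption: a hypothetical task that takes 100 minutes without AI.
baseline = 100.0                        # minutes, no AI assistance

measured_with_ai = baseline * 1.19      # measured: tasks took 19% LONGER
perceived_with_ai = baseline * 0.80     # perceived: "we were 20% faster"

# Gap between belief and measurement, in percentage points of baseline:
gap_points = (measured_with_ai - perceived_with_ai) / baseline * 100
print(round(gap_points))                # → 39
```

Framing both numbers against the same baseline is what makes the 19 percent slowdown and the perceived 20 percent speedup combine into a 39-point gap.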
The METR study's authors were careful to note that their findings applied specifically to experienced developers working in large, complex codebases they knew well — environments where the benefit of AI for code search, documentation retrieval, and unfamiliar-territory assistance is relatively low. The results do not imply that AI tools offer no value; follow-on data from Faros AI, which analyzed telemetry from more than 10,000 developers across 1,255 teams, found that high-AI-adoption developers handled significantly more concurrent workstreams and interacted with 47 percent more pull requests per day [21]. The picture is nuanced: AI shifts the nature of productive work more than it uniformly accelerates it.
By early 2026, METR itself noted that growing developer reluctance to work without AI tools was complicating its follow-on study design, and acknowledged that AI tools were likely providing greater speedup to developers than its July 2025 data had captured — particularly for less-experienced developers and those working on greenfield projects [16].
The Security Crisis Hiding Inside AI Productivity
Independent of the productivity debate, a broad and consistent body of research has documented alarming security characteristics in AI-generated code. Veracode's 2025 GenAI Code Security Report, which tested more than 100 large language models across Java, Python, C#, and JavaScript, found that 45 percent of AI-generated code samples introduced OWASP Top 10 security vulnerabilities [42]. Java was identified as the riskiest output language, with a 72 percent security failure rate across tasks. Critically, the researchers noted that security performance has remained largely static even as newer models have dramatically improved syntactic correctness: larger and more recent models do not generate significantly more secure code than their predecessors.
A separate analysis by the Cloud Security Alliance found that 62 percent of AI-generated code solutions contained design flaws or known security vulnerabilities even when using the latest foundational models [45]. Opsera's 2026 AI Coding Impact Benchmark Report, drawn from analysis across more than 250,000 developers at 60-plus enterprise organizations, found that AI-generated code introduces 15 to 18 percent more security vulnerabilities than human-written code [49].
45% of AI-generated code samples failed security tests in Veracode's 100-LLM study. The best-performing model in the BaxBench benchmark — Anthropic's Claude Opus 4.5 Thinking — produced secure and correct code only 56% of the time without explicit security prompting [42,44]. Aikido Security's 2026 survey of 450 developers, AppSec engineers, and CISOs found that 69% had discovered AI-introduced vulnerabilities in their own systems, and AI-generated code was identified as the cause of one in five security breaches [49].
These figures carry a direct implication for any practitioner who relies on AI-generated code: without the foundational computational literacy to recognize insecure logic, improper authentication patterns, or subtle concurrency errors, the user of an AI coding assistant cannot evaluate the output they are accepting. And as the Aikido breach figure above indicates, the risk is operational in enterprise environments, not merely theoretical [49].
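To make the evaluation problem concrete, consider a hypothetical example (constructed for illustration, not drawn from any of the cited studies) of the kind of subtle flaw an assistant can emit and an unwary reviewer can accept: a database lookup that passes every happy-path test yet is injectable.

```python
import sqlite3

# Hypothetical AI-generated lookup: syntactically correct, works on
# normal inputs, but interpolates user input directly into SQL --
# a classic OWASP-style injection flaw.
def find_user_unsafe(conn, username):
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# The computationally literate correction: a parameterized query, so
# the input is bound as data and is never parsed as SQL.
def find_user_safe(conn, username):
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"                      # crafted malicious input
print(len(find_user_unsafe(conn, payload)))  # → 2 (dumps every row)
print(len(find_user_safe(conn, payload)))    # → 0 (no such user)
```

Both functions return identical results for the username "alice"; only the crafted input separates them, which is precisely why line-by-line security literacy, not functional testing alone, is needed to catch the unsafe version.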
CrowdStrike researchers in December 2025 identified a further dimension of risk: certain AI models produced code with elevated vulnerability rates when prompted with politically sensitive topics, introducing a subtle, prompt-conditional attack surface that standard security scanning cannot easily detect [43].
Workforce Displacement: Who Is Actually at Risk?
The most direct evidence that AI is restructuring the software workforce comes from a landmark study published by Stanford University's Digital Economy Lab in August 2025. Led by economist Erik Brynjolfsson, the study analyzed payroll records from ADP covering millions of workers across tens of thousands of firms — described as "the largest-scale, most real-time effort" to date to quantify AI's labor-market impact [23,26].
The findings were consequential and precise. Since late 2022, when ChatGPT catalyzed widespread AI adoption, early-career workers aged 22 to 25 in AI-exposed occupations — including software engineering, customer service, and accounting — experienced a 13 percent relative decline in employment. For software developers specifically in that age cohort, employment had fallen nearly 20 percent from its late-2022 peak by July 2025 [22,26,30].
Crucially, employment for workers aged 30 and over in the same occupations remained stable or grew by 6 to 12 percent over the same period. The Stanford researchers hypothesized that AI is particularly effective at replacing "codified knowledge" — the formal, textbook-derived programming skills that constitute the core of early-career competence — while struggling to replicate the tacit, experiential knowledge that defines senior-level expertise: handling unexpected failures, navigating organizational context, integrating requirements across complex sociotechnical systems [26,27].
The Stanford study further distinguished between AI that automates work — directly executing tasks — and AI that augments human capabilities. Employment declined in the former category but remained stable or grew in the latter. This distinction offers a prescription: the programmer most vulnerable to displacement is one whose work consists of producing routine, well-specified code from established patterns. The programmer least vulnerable is one who uses AI as a force multiplier for judgment-intensive, architecturally complex, or organizationally embedded work — the kind of work that requires deep computational literacy to even define correctly.
Python Tutor and the Science of Computational Literacy
Against this backdrop, UC San Diego cognitive scientist Philip Guo's recent recognition for Python Tutor — a free, browser-based code visualization platform he created in 2010 and which has since reached more than 25 million users in 180-plus countries, generating over 500 million code visualizations — carries particular resonance [1]. The tool addresses what Guo characterizes as the foundational barrier to programming comprehension: the computer's logic is invisible, and beginners must construct accurate mental models of step-by-step execution without being able to see what is happening.
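The mental-model barrier Guo describes is easy to illustrate. The snippet below is a standard teaching example of hidden state (not taken from Python Tutor itself): without a step-by-step picture of memory, a beginner cannot see that two names can refer to the same object, which is exactly what the tool's visualizations expose.

```python
a = [1, 2, 3]
b = a            # b is an alias for the SAME list object, not a copy
b.append(4)
print(a)         # → [1, 2, 3, 4]  -- "a" changed, surprising a novice

c = a[:]         # a shallow copy creates a new, independent object
c.append(5)
print(a)         # → [1, 2, 3, 4]  -- unchanged this time
print(c)         # → [1, 2, 3, 4, 5]
```

Nothing in the source text distinguishes the two assignments; only an accurate mental model of execution (or a visualization of it) does.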
Guo's argument for the continued — and indeed intensified — importance of coding literacy in the AI age is direct: "AI may generate code faster than any human. But the need to understand what code is doing has only intensified. AI generates code that may seem right, but it isn't always reliable. You need to evaluate, debug and steer the code that AI produces" [1]. His concept of "conversational programmers" — individuals who acquire sufficient programming fluency to collaborate effectively with software engineers, first articulated in a 2015 paper with Parmit Chilana that itself received a Test of Time award alongside Python Tutor — has evolved into something broader in the AI age: the ability to be productively in conversation with AI tools that generate software [1].
For domain specialists such as engineers, scientists, and analysts working in MATLAB or similar computational environments, this reframing is directly applicable. Their existing computational literacy — understanding data structures, control flow, numerical methods, debugging methodology — constitutes exactly the kind of tacit knowledge that the Stanford study identifies as a buffer against displacement, and that Guo identifies as the prerequisite for effective AI collaboration.
What This Means for the Working Engineer
The evidence points to a consistent conclusion: the value of coding skill in 2026 is not diminished but redistributed. Routine, syntax-level programming — the "codified knowledge" most susceptible to automation — has declined in market value. Architectural judgment, security awareness, cross-domain integration, and the ability to evaluate and steer AI-generated code have risen sharply. Engineers and scientists who view their existing computational literacy as an asset to extend, rather than a credential to retire, are well-positioned. Those who conclude that AI makes learning to code unnecessary are likely to find themselves unable to verify, secure, or troubleshoot the output they depend on — a risk with consequences that range from a failed simulation to a breached enterprise system.
The "Vibe Coding" Phenomenon and Its Limits
The term "vibe coding" — coined in 2025 and now firmly embedded in developer discourse — describes a mode of AI-assisted development in which the programmer accepts AI-generated code largely or entirely without line-by-line review, trusting the model to implement intentions described in natural language [2,9]. Major tools including GitHub Copilot, Cursor, Lovable, and Replit have enabled even individuals with minimal formal training to produce functionally impressive applications.
MIT's Technology Review named generative coding one of its ten breakthrough technologies of 2026, noting both its transformative potential and its structural hazards: "there's still no substitute for good old human know-how — because AI hallucinates nonsense, there's no guarantee that its suggestions will be helpful or secure" [9]. Researchers at MIT CSAIL have separately highlighted how even syntactically plausible AI-generated code may fail to perform as intended, particularly in large, complex codebases [9].
The vibe-coding phenomenon also exhibits a structural irony directly relevant to workforce considerations: it tends to help experienced developers more than novices. Because experienced practitioners have the computational literacy to rapidly evaluate, prune, and correct AI output, they capture more of the efficiency gain. Junior developers, lacking that evaluative framework, are more likely to introduce the kind of subtle errors — insecure authentication, improper concurrency, missing input validation — that downstream review must catch [22,27].
Domain Expertise as a Durable Competitive Advantage
For engineers working in specialized computational environments such as MATLAB, Simulink, LabVIEW, or domain-specific scientific computing frameworks, the calculus is particularly favorable. The combination of deep domain knowledge — signal processing, control systems, finite element analysis, radar phenomenology — with programming fluency constitutes exactly the kind of cross-domain, tacit expertise that the Stanford study identifies as most resistant to AI substitution [26]. AI coding tools can generate boilerplate matrix operations or standard filter implementations on demand; they cannot substitute for the judgment required to know whether the numerics are correctly conditioned, whether the algorithm is appropriate for the data's statistical characteristics, or whether a simulation result is physically plausible.
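A small numerical sketch of that conditioning judgment, using the Hilbert matrix, a textbook example of severe ill-conditioning (illustrative only, not from any cited study). An AI tool will happily generate the solve; knowing to check the condition number before trusting the answer is the domain expertise.

```python
import numpy as np

n = 12
# Hilbert matrix H[i, j] = 1 / (i + j + 1): notoriously ill-conditioned.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true                   # manufacture a system with known solution

x = np.linalg.solve(H, b)        # the boilerplate an AI tool emits on demand

# The step the boilerplate omits: is this system even trustworthy?
cond = np.linalg.cond(H)         # ~1e16 for n = 12: expect to lose ~16 digits
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

print(f"condition number ~ {cond:.1e}")
print(f"relative error   ~ {rel_err:.1e}")   # far above machine epsilon
```

The solver raises no error and returns a plausible-looking vector; only the condition-number check reveals that many of its digits are noise, the computational analogue of asking whether a simulation result is physically plausible.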
This advantage is compounded in regulated industries. Defense, aerospace, medical devices, and nuclear applications require verifiable, auditable software. The Opsera benchmark data showing 15 to 18 percent higher vulnerability rates in AI-generated code represents not merely a productivity drag but a certification and liability exposure that domain experts with formal software training are uniquely positioned to manage [49].
The Education System Is Not Keeping Pace
While the case for coding literacy has strengthened, institutional preparation lags. Research published in the AI Literacy Review (February 2026), drawing on Code.org and CSforAll data, found that only four U.S. states explicitly address AI in their computer science education standards. A November 2025 survey of more than 1,000 U.S. faculty revealed a similar disconnect: 95 percent believed students would become increasingly overreliant on AI tools, yet only 49 percent rated AI literacy skills as very or extremely important in their instruction [5]. The World Economic Forum and Microsoft have independently argued that computational thinking — understanding how to decompose problems, reason about systems, and design solutions — constitutes the foundational literacy of the AI era, one that must be cultivated even as surface-level syntax acquisition becomes less critical [6,8].
Key Research Findings at a Glance
- METR randomized controlled trial (July 2025): 16 experienced open-source developers, 246 real tasks, random assignment to AI-allowed or AI-disallowed conditions. Developers using AI tools took 19% longer to complete tasks, yet believed they had been 20% faster. The 39-point gap suggests systematic overconfidence in AI assistance for complex, mature codebases. [12,13]
- Stanford Digital Economy Lab (August 2025): ADP payroll data covering millions of workers across tens of thousands of firms. Employment for software developers aged 22–25 declined nearly 20% from its late-2022 peak by July 2025, while employment for workers over 30 in the same fields grew 6–12%. AI's displacement effect is concentrated in roles relying on "codified knowledge." [23,26]
- Veracode 2025 GenAI Code Security Report: 100+ LLMs tested across four languages and 80 coding tasks. 45% of AI-generated code samples introduced OWASP Top 10 vulnerabilities, and newer, larger models do not produce significantly more secure code than their predecessors. [42]
- JetBrains State of the Developer Ecosystem 2025: 24,534 developers across 194 countries. 85% regularly use AI tools for coding; 62% rely on at least one AI coding assistant or agent. Nearly 9 in 10 report saving at least one hour per week, and 68% expect employers to require AI tool proficiency in the near future. [11]
- Python Tutor: created by Prof. Philip Guo in 2010; 25+ million users in 180+ countries and over 500 million code visualizations. Now includes built-in AI tutoring features; recently recognized with a Test of Time award; NSF CAREER-funded. [1]
Conclusions
The answer to the question "Is learning to code still worthwhile?" is not a simple affirmative — it is a qualified and urgent one. The specific skills worth acquiring have changed. The route to coding fluency has changed. The tools available to support learning have transformed dramatically. But the underlying need — to form accurate mental models of how computational systems execute, to evaluate the correctness and security of code, and to steer automated systems toward reliable outcomes — has intensified precisely because AI has introduced a vast new supply of plausible-looking but potentially defective code into every computational workflow.
For the working engineer who came up through FORTRAN and C and now lives in MATLAB, the existing foundation is a competitive asset, not a liability. The imperative is not to re-learn syntax in a new language, but to develop fluency with AI-assisted tooling, sharpen the evaluative instincts needed to detect AI-generated errors and security flaws, and extend computational literacy into the architectural and security domains that AI cannot reliably manage on its own. The Stanford data are unambiguous: depth of experience buffers against displacement. The METR data are equally instructive: overconfidence in AI tools is measurable, consequential, and correlated with declining output quality.
Microsoft's Bay Area blog put it with appropriate directness in 2025: "Betting against computer science today is like betting against reading in the 14th century" [6]. The analogy is sound. The ability to read did not become less valuable when the printing press democratized text. It became more valuable, because the world was suddenly full of text that needed to be evaluated. The ability to read code will not become less valuable when AI democratizes code generation. It will become more valuable, because the world is now full of AI-generated code that needs to be evaluated. The engineer who can do that evaluation is not competing with AI — they are completing it.
Verified Sources and Formal Citations
- [1] P. J. Guo, "A UC San Diego Tool Teaching Code to 25 Million Is Even More Critical in Age of AI," UC San Diego Today, Mar. 19, 2026. https://today.ucsd.edu
- [2] "AI Coding Tech Trends 2026," EU Code Week Blog, Feb. 3, 2026. https://codeweek.eu/blog/ai-coding-tech-trends-2026/
- [3] Stack Overflow, "2025 Developer Survey," Stack Overflow, 2025. Results reported in: MIT Technology Review, "AI coding is now everywhere. But not everyone is convinced," Jan. 5, 2026. https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/
- [4] GitClear, "Code Quality Analysis 2025." Reported in: MIT Technology Review, ibid.
- [5] AI Literacy Institute, "AI Literacy Review — February 3, 2026," Feb. 7, 2026. https://ailiteracy.institute/ai-literacy-review-february-3-2026/
- [6] Microsoft Bay Area Blog, "In this New AI Era, Coding Is Literacy," Jun. 25, 2025. https://blogs.microsoft.com/bayarea/2025/06/25/in-this-new-ai-era-coding-is-literacy/
- [7] J. Njenga, "12 AI Coding Emerging Trends That Will Dominate 2026," Medium / AI Software Engineer, Jan. 2, 2026. https://medium.com/ai-software-engineer/12-ai-coding-emerging-trends...
- [8] Waala.dev, "Should You Still Learn Coding in 2026?" Nov. 29, 2025. https://www.waala.dev/blog/should-you-still-learn-coding-in-2026
- [9] MIT Technology Review, "Generative coding: 10 Breakthrough Technologies 2026," Jan. 12, 2026. https://www.technologyreview.com/2026/01/12/1130027/...
- [10] Trigi Digital, "The Impact of AI Coding in 2026: Developer Productivity Revolution," Jan. 28, 2026. https://trigidigital.com/blog/ai-coding-impact-2026
- [11] JetBrains, "The State of Developer Ecosystem 2025: Coding in the Age of AI," Oct. 21, 2025. https://blog.jetbrains.com/research/2025/10/state-of-developer-ecosystem-2025/
- [12] J. Becker, N. Rush, E. Barnes, and D. Rein, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity," arXiv:2507.09089, Jul. 2025. https://arxiv.org/abs/2507.09089
- [13] METR, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity" (blog post), Jul. 10, 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
- [14] S. Goedecke, "METR's AI productivity study is really good," Jul. 2025. https://www.seangoedecke.com/impact-of-ai-study/
- [15] METR, Home page / Research summary, 2025. https://metr.org/
- [16] METR, "We are Changing our Developer Productivity Experiment Design," Feb. 24, 2026. https://metr.org/blog/2026-02-24-uplift-update/
- [17] DX/GetDX Newsletter, "METR's study on how AI affects developer productivity," Jul. 23, 2025. https://newsletter.getdx.com/p/metr-study-on-how-ai-affects-developer-productivity
- [18] LessWrong, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity" (community discussion), 2025. https://www.lesswrong.com/posts/9eizzh3gtcRvWipq8/
- [19] Augment Code, "Why AI Coding Tools Make Experienced Developers 19% Slower and How to Fix It," Oct. 3, 2025. https://www.augmentcode.com/guides/...
- [20] DX, "Unpacking METR's findings: Does AI slow developers down?" 2025. https://getdx.com/blog/unpacking-metri-findings-does-ai-slow-developers-down/
- [21] Faros AI, "Lab vs. Reality: AI Productivity Study Findings," Jul. 28, 2025. https://www.faros.ai/blog/lab-vs-reality-ai-productivity-study-findings
- [22] Final Round AI, "Young Software Developers Losing Jobs to AI, Stanford Study Confirms," 2025. https://www.finalroundai.com/blog/stanford-study-shows-young-software-developers-losing-jobs-to-ai
- [23] TIME, "Who's Losing Jobs to AI? New Stanford Analysis Breaks It Down," Aug. 26, 2025. https://time.com/7312205/ai-jobs-stanford/
- [24] E. Brynjolfsson, G. Chan, and D. Chen, "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence," Stanford Digital Economy Lab, Aug. 2025. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
- [25] CNBC, "AI adoption linked to 13% decline in jobs for young U.S. workers, Stanford study reveals," Aug. 28, 2025. https://www.cnbc.com/2025/08/28/...
- [26] Fortune, "Stanford publishes first-of-its-kind study on AI's impact on entry-level workers," Aug. 26, 2025. https://fortune.com/2025/08/26/stanford-ai-entry-level-jobs-gen-z-erik-brynjolfsson/
- [27] SalesforceDevops.net, "Stanford Confirms Quiet Erosion: The First Large-Scale Evidence of AI's Impact on Entry-Level Jobs," Aug. 28, 2025. https://salesforcedevops.net/index.php/2025/08/28/stanford-confirms-quiet-erosion/
- [28] Entrepreneur, "These Fields Are Losing the Most Entry-Level Jobs to AI," Aug. 26, 2025. https://www.entrepreneur.com/business-news/...
- [29] Constellation Research, "Stanford study: AI is eating entry level jobs," 2025. https://www.constellationr.com/blog-news/insights/stanford-study-ai-eating-entry-level-jobs
- [30] HR Executive, "Stanford researchers tracked millions of jobs. Here's who is losing to AI," Sep. 4, 2025. https://hrexecutive.com/stanford-researchers-tracked-millions-of-jobs-heres-who-is-losing-to-ai/
- [31] HR Dive, "AI is having 'a significant and disproportionate' effect on young workers' job prospects," Aug. 26, 2025. https://www.hrdive.com/news/ai-having-significant-effect-on-young-workers-prospects/758633/
- [42] Veracode, "2025 GenAI Code Security Report" (blog summary), Sep. 8, 2025. https://www.veracode.com/blog/genai-code-security-report/
- [43] CrowdStrike, "CrowdStrike Researchers Identify Hidden Vulnerabilities in AI-Coded Software," Dec. 11, 2025. https://www.crowdstrike.com/en-us/blog/crowdstrike-researchers-identify-hidden-vulnerabilities-ai-coded-software/
- [44] Dark Reading, "As Coders Adopt AI Agents, Security Pitfalls Lurk in 2026," Dec. 30, 2025. https://www.darkreading.com/application-security/coders-adopt-ai-agents-security-pitfalls-lurk-2026
- [45] Cloud Security Alliance, "Understanding Security Risks in AI-Generated Code," Jul. 9, 2025. https://cloudsecurityalliance.org/blog/2025/07/09/understanding-security-risks-in-ai-generated-code
- [46] Veracode, "AI-Generated Code Security Risks: What Developers Must Know," Sep. 9, 2025. https://www.veracode.com/blog/ai-generated-code-security-risks/
- [47] CodeRabbit, "State of AI vs. Human Code Generation Report," 2025. https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report
- [48] Qualys, "The New Era of Application Security: Reasoning-Based Agents, Runtime Reality, and Risk Intelligence," Mar. 17, 2026. https://blog.qualys.com/product-tech/2026/03/17/new-era-application-security-reasoning-agents-runtime-risk-2026
- [49] Growexx, "The AI Code Security Crisis of 2026: What Every CTO Needs to Know," Feb. 12, 2026 (citing Aikido Security 2026, Opsera 2026, IBM Cost of Data Breach 2025). https://www.growexx.com/blog/ai-code-security-crisis-2026-cto-guide/
- [50] Fortune, "AI coding tools exploded in 2025. The first security exploits show what could go wrong," Dec. 15, 2025. https://fortune.com/2025/12/15/ai-coding-tools-security-exploit-software/
- [51] P. J. Guo, "Online Python Tutor: Embeddable Web-Based Program Visualization for CS Education," in Proc. SIGCSE 2013. ACM, 2013. DOI: 10.1145/2445196.2445368.
- [52] P. J. Guo and P. K. Chilana, "Non-Native English Speakers Learning Computer Programming: Barriers, Desires, and Design Opportunities," in Proc. CHI 2015. ACM, 2015. (Conversational programmers concept.)