The new role of the philosophy major
Back when I was going to school, we all thought it would be cool to major in philosophy, but we were told it was a professional dead end. At best, the degree was a respectable pre-law play; beyond that, it was good only for stand-up comedy or teaching philosophy to other vocation-averse souls. Times change, however, and in the age of job obsolescence and robots, the misunderstood philosophy major may be having the last laugh.
For instance, the rise of artificial intelligence has produced a new and somewhat unexpected figure within some of the world’s most technologically advanced companies: the philosopher. At Anthropic and other AI developers, individuals like Amanda Askell have been brought into the fold not to write code but to help decide how that code should behave. Askell, who studied philosophy at Oxford and holds a PhD from New York University, has played a role in shaping the ethical framework behind AI systems, helping define what constitutes harm, where boundaries should be drawn and how machines ought to respond when confronted with morally ambiguous situations.
The reason for the shift is not difficult to discern. Artificial intelligence systems increasingly operate in spaces once reserved for human judgment. They answer questions, summarize information and, in many cases, influence decisions. When a machine declines to answer, flags a claim as misleading or attempts to balance competing viewpoints, it is, at least in appearance, making a moral choice. Someone must decide how those choices are made.
At the same time, the regulatory environment is evolving. Governments, both in the United States and abroad, have begun to signal that AI will not remain an unregulated frontier indefinitely. Companies are preparing for a future in which they must demonstrate not only technical competence but ethical responsibility. That preparation requires people capable of translating broad principles such as fairness, accountability and transparency into operational guidelines. Philosophers are well suited to that task.
Public trust, too, has become a central concern. The early years of social media offered a cautionary tale: innovation without guardrails can produce unintended consequences at scale. AI companies appear eager to avoid repeating that history. By incorporating ethical review into the development process, they hope to anticipate problems before they become public controversies.
Beyond those headline examples lies a broader, quieter trend. A growing number of positions now fall under the umbrella of “AI ethics” or “responsible AI.” These roles do not always carry the title of philosopher, but they draw heavily on the same intellectual traditions. They include trust and safety specialists, policy analysts and researchers tasked with examining bias, misinformation and the societal impact of automated systems. Many come from interdisciplinary backgrounds, combining philosophy with law, political science or even computer science itself.
In practice, the work is less about abstract theorizing than about applied judgment. It involves reviewing model outputs, drafting internal guidelines and advising engineers on questions that resist easy answers. The philosopher in this context is not standing apart from the machine, but sitting alongside its builders, attempting to shape its behavior in real time.
Still, it would be a mistake to overstate the trend. Engineers continue to dominate hiring, and most companies are not suddenly filling their ranks with humanities majors. Even within ethics teams, there is a strong preference for hybrid expertise—individuals who can move comfortably between conceptual reasoning and technical understanding.
The presence of philosophers in Silicon Valley does not signal a revolution in hiring so much as an acknowledgment of complexity. As machines take on more human-like functions, the questions they raise begin to resemble the ones humans have wrestled with for centuries.
