Software giant Palantir Technologies faces intense internal dissent as employees challenge the company's deepening ties to controversial U.S. government operations, particularly within immigration enforcement. Concerns grew significantly during President Donald Trump’s second term, prompting staff to question the firm’s foundational commitment to civil liberties, according to current and former workers interviewed by WIRED. This internal reckoning escalated dramatically after federal agents killed a nurse during anti-ICE protests in Minneapolis.
The internal unrest at Palantir did not emerge suddenly; it simmered for months before boiling over, fueled by a series of events that chipped away at employee trust. A former Palantir employee, speaking to WIRED, recounted a telling phone conversation with a colleague last fall, where the greeting itself was a stark question: "Are you tracking Palantir's descent into fascism?" This sentiment, described as "feeling wrong" rather than merely "unpopular and hard," captures the deepening unease among staff. By the fall of the president's second term, Palantir's software had become the technological backbone of the administration's immigration enforcement machinery.
The company provided tools to identify, track, and deport immigrants for the Department of Homeland Security. This direct involvement with a policy many considered harsh began to erode internal consensus. Employees started raising internal alarms, and their discomfort grew into a broader challenge for the firm.
The tension reached a critical point in January, following the violent killing of Alex Pretti, a nurse shot by federal agents during protests against Immigration and Customs Enforcement (ICE) in Minneapolis. This incident ignited a firestorm within Palantir’s internal Slack channels. Workers from across the company flooded a dedicated thread, demanding clarity and accountability from management, including CEO Alex Karp, regarding Palantir’s relationship with ICE. "Our involvement with ice has been internally swept under the rug under Trump2 too much," one person wrote in a Slack message, as reported by WIRED. "We need an understanding of our involvement here," they insisted.
The urgency was clear, but management's response only deepened the frustration. Around that time, Palantir began altering its Slack data retention policies in #palantir-in-the-news: messages were automatically wiped after seven days. The change, unannounced to staff, drew suspicion, and a member of Palantir's cybersecurity team said it was a direct reaction to leaks.
The company did release an updated wiki defending its Homeland Security contracts, asserting the technology "is making a difference in mitigating risks while enabling targeted outcomes."
Then came another blow: the deadly February 28 missile strike on an Iranian elementary school, on the first full day of the Trump administration's and Israel's war in Iran. More than 120 children were killed when a Tomahawk missile struck the school.
Investigations concluded the United States was responsible. Palantir’s Maven system had reportedly been used during that day’s strikes. For a company already reeling over its ICE work, possible involvement in the deaths of children became a breaking point. "I guess the root of what I’m asking is … were we involved, and are doing anything to stop a repeat if we were," one employee asked in the Palantir news Slack channel.
The internal friction continued into March. CEO Alex Karp gave an interview to CNBC, claiming AI could undermine "humanities-trained—largely Democratic—voters" and increase the power of working-class male voters. These statements drew internal criticism.
This week, Palantir leadership further incensed workers with a Saturday afternoon manifesto that condensed Karp's book, "The Technological Republic," into 22 points, including a suggestion that the U.S. consider reinstating the draft. The document alarmed some workers, and critics labeled it "fascist."
The direct questions from employees during "Ask Me Anything" (AMA) forums revealed the depth of the internal struggle. These sessions featured figures like Chief Technology Officer Shyam Sankar and members of the privacy and civil liberties (PCL) teams. Some efforts were even unsanctioned.
One notable AMA was organized independently by two team leads, one of whom had worked directly on the ICE contract. "This was very rogue," a PCL employee who had worked on the ICE contract said in a February AMA, a recording of which WIRED obtained. "Courtney [Bowman, head of the privacy and civil liberties team] doesn’t know that I’m spending three hours this week talking to IMPLs [Palantir terminology for its client-facing product teams], but I think this is the only real way to start going in the right direction." The admission revealed internal fractures. During this lengthy call, employees posed hard questions. Could ICE agents delete audit logs within Palantir’s software?
Could agents create harmful workflows independently? What was the most malicious outcome possible? The PCL employee conceded that "a sufficiently malicious customer is, like, basically impossible to prevent at the moment." Control, they explained, relied on "auditing to prove what happened" and legal action after a contract breach.
This offered little comfort. A current employee tried to level with the group, explaining Palantir’s work with ICE was a priority for CEO Alex Karp, unlikely to change soon. "Karp really wants to do this and continuously wants this," they stated. "We’re largely at the role of trying to give him suggestions and trying to redirect him, but it was largely unsuccessful." This suggested a clear directive from the top. Karp himself later sat for a prerecorded interview with Bowman, but refused to address ICE contracts directly.
Instead, he proposed that employees sign nondisclosure agreements to learn the details, and transparency remained elusive. Asked about military contracts, a Palantir spokesperson said the company was "proud" to support the U.S. military "across Democratic and Republican administrations," a statement that did not address the strike in Iran.
Following Karp's CNBC comments on AI and voters, one worker asked on Slack, "Is it true that AI disruption is going to disproportionately negatively affect women and people who vote Democrat? and if it is, why are we cool with that?" These questions went unanswered publicly. The manifesto also sparked outrage. "I’m curious why this had to be posted," one frustrated employee wrote. "It’s like we taped a ‘kick me’ sign on our own backs," another summarized. Palantir was founded in the shadow of the September 11, 2001, attacks, with an initial venture capital investment from the CIA.
The prevailing national consensus at the time viewed fighting terrorism abroad as the most vital mission facing the United States. The company, co-founded by tech billionaire Peter Thiel, developed software designed as a high-powered data aggregation and analysis tool. Its applications ranged from supporting private businesses to powering the U.S. military’s targeting systems.
For two decades, employees largely accepted the intense external criticism and awkward conversations with family and friends about working for a company named after J. R. R. Tolkien's corrupting, all-seeing orb.
They believed in the mission. The broad story Palantir told itself, and its employees, was that it emerged from 9/11 with a specific concern: the push for safety might infringe on civil liberties. "We were worried that that safety might infringe on civil liberties," one former employee told WIRED. "And now the threat’s coming from within." This sentiment captures a deep identity crisis. "We were supposed to be the ones who were preventing a lot of these abuses. Now we’re not preventing them.
We seem to be enabling them." The company's original narrative has fractured. Its employees now grapple with a stark realization. Palantir has always cultivated a secretive reputation, strictly forbidding employees from speaking to the press and requiring alumni to sign non-disparagement agreements.
Historically, management at least appeared open to internal engagement and criticism, according to multiple employees. Over the last year, however, much of that feedback has been met with what some describe as philosophical soliloquies and redirection. "It’s never been really that people are afraid of speaking up against Karp," one current employee told WIRED. "It’s more a question of what it would do, if anything." The culture shifted. This dissent, marked by shame and uncertainty, has appeared in internal channels whenever Palantir has been in the news over the last year. "I think the only thing not different is a lot of folks are still incredibly wary about leaks and talking to the press," one current employee told WIRED, describing the evolving internal company culture.
Despite this, Karp recently told workers that the company is "behind the curve internally" when it comes to popularity. He has been consistent in this view; in March 2024, Karp told a CNBC reporter that "if you have a position that does not cost you ever to lose an employee, it’s not a position." For many employees, however, this culture shift feels intentional. "Maybe it’s gotten to a place where encouraging independent thought and questioning leads to some bad conclusions," one former worker told WIRED. This internal upheaval at Palantir extends far beyond the walls of a single tech company; it spotlights a critical intersection where powerful technology meets government policy, directly impacting human lives and civil liberties.
For many families, particularly those in immigrant communities, Palantir's software represents a tangible threat, not an abstract data tool. The company's wiki speaks of "mitigating risks while enabling targeted outcomes," but for families caught up in immigration enforcement, the reality can mean separation, detention, and deportation, outcomes made more efficient by these very tools.
The company’s original mission, framed around preventing abuses post-9/11, now appears to many employees to be enabling them. This raises fundamental questions about the ethical responsibilities of technology companies and their developers when their products are used by government agencies in ways that workers find morally objectionable. The implications reach into national security, the future of artificial intelligence, and the balance between state power and individual rights, both domestically and across borders.
When a company's internal discourse shifts from addressing concerns to wiping conversations, it suggests a chilling effect on open dialogue, one that damages not only the company's culture but also the public's trust in the technology itself.
Employees will continue to watch for concrete changes in company policy or a shift in leadership’s stance, rather than just philosophical discussions. The broader conversation around tech ethics and government contracts will likely intensify, pushing other companies to scrutinize their own partnerships. Regulatory bodies and civil liberties groups will undoubtedly continue to press for greater transparency and accountability from firms like Palantir, ensuring the debate moves beyond internal Slack channels and into the public sphere.
Future government contracts, particularly those with immigration or defense agencies, will be scrutinized more closely by both the public and by potential employees weighing their ethical commitments. The choices made by Palantir’s leadership in the coming months will set a precedent for how tech companies navigate the complex moral landscape of national security work.
Key Takeaways
- Palantir employees are increasingly questioning the company's civil liberties commitments due to its contracts with U.S. immigration enforcement and military operations.
- Incidents like the killing of Alex Pretti and the Iran missile strike have intensified internal dissent, leading to direct challenges to CEO Alex Karp and management.
- Management responses, including wiping Slack messages and philosophical defenses, have further frustrated workers who feel their concerns are being dismissed.
- The debate highlights a broader tension within the tech industry regarding the ethical deployment of powerful data analysis tools by government agencies.
Source: Ars Technica