Company News

We Just Raised Series B. Here's What it Means for the Future of IGA.

Mar 31, 2026

We're at one of those rare moments where an entire software category gets rewritten from scratch. Not improved. Replaced. AI isn't making identity governance faster - it's making the old architecture obsolete.

When Niv and I started Linx two years ago, we made a bet: that the identity governance category was overdue for a fundamental rethink, and that AI-native architecture - not AI bolted onto legacy infrastructure - would be what made that possible. That the future of IGA wasn't periodic reviews and manual workflows. It was continuous, autonomous, and built for a world where humans, machines, and AI agents all coexist inside the same enterprise.

Today, I'm proud to announce that Linx Security has raised a $50M Series B, led by Insight Partners, with continued support from Cyberstarts and Index Ventures - bringing our total funding to $83 million. And alongside this round, we've launched Linx Autopilot: the industry's first AI agent purpose-built for Identity Governance and Administration.

This isn't just a funding milestone. It's a signal that the IGA category is at an inflection point - and that Linx is leading it.

Why Now

The identity landscape has been transformed by three forces converging at once.

First, AI agents are proliferating inside every enterprise - not as experiments, but as active participants in business workflows. They hold credentials. They access sensitive systems. They act with autonomy. And almost none of today's governance frameworks were built to manage them.

Second, the attack surface has exploded. One breach, one over-privileged service account, one dormant credential - and the damage can be catastrophic. Boards know it. CISOs feel it daily. The compliance frameworks are finally catching up.

Third - and this is what excites me most - the technology is finally ready. AI-native architecture makes it possible to do in seconds what traditional tools take weeks to accomplish: detect, evaluate risk in context, and act. Not reactively. Continuously.

IGA was always treated as a necessary evil. A compliance checkbox. Something you suffered through. We built Linx on the premise that it doesn't have to be that way.

What We're Building - and Why It Matters Now

The enterprise of 2026 doesn't look like the enterprise IGA was designed for. AI agents are being provisioned inside every workflow. Non-human identities now outnumber human ones. The attack surface isn't growing linearly - it's multiplying. And the governance frameworks built for a world of on-prem directories and annual access reviews were simply never designed for this reality.

Linx is built AI-native from the ground up - not AI layered onto legacy architecture. That distinction matters more than it might seem. It's what allows us to move from periodic, reactive governance to something fundamentally different: continuous, autonomous identity security that operates at the speed of the business and the speed of the threat.

Think of it as having a security operator working 24/7 on your behalf - one that monitors every identity in your environment, detects risk in context as it emerges, and acts before the damage is done. When a privileged account behaves unexpectedly, it responds. When an AI agent is provisioned with excessive permissions, it sees it. When an employee moves roles and leaves ghost access behind, it remediates - before an attacker finds it first.

Security teams don't lose control. They gain leverage. The tedious, repetitive work gets handled autonomously. The decisions that require human judgment get escalated. That's what modern identity governance looks like - and that's what we're delivering.

To the People Who Made This Possible

None of this happens without the people.

To Niv - twenty years of shared history, and I still learn something from you every week. Building this company alongside you has been one of the great privileges of my career. You push this product to places I wouldn't have imagined.

To Sarit - your technical vision and relentless standards are woven into every line of this platform. What you've built with the engineering team is something we'll be proud of for a long time.

To our entire Linx team - 100 people who bet on a vision and made it real. Every customer win, every product breakthrough, every late night - that's us, together. I'm incredibly proud of what we've built as a team.

To Teddie, Elan and the Insight Partners team - your belief in where this market is going gave us a true partner for the next chapter. And to Gili at Cyberstarts, and Shardul at Index Ventures - you've been with us from the beginning, and your conviction in this vision has never wavered. We don't get here without all of you.

And to our customers - the security leaders and identity practitioners who chose to build with us early, challenged us to be better, and trusted us with what matters most. You are the reason we do this. Your trust is the highest validation we know.

What Comes Next

The market isn't just ready - it's asking for it. Every security leader we talk to, every enterprise scrambling to govern AI agents they provisioned last quarter with no visibility into what they can access, confirms what we believed two years ago: this category was overdue, and the moment is now.

What comes next is simple to say and hard to execute: we scale. We're growing the team, accelerating the Autopilot roadmap, and going deeper with the enterprises already trusting us to govern millions of identities in production.

The IGA category is being rewritten. The window to define what the next generation looks like is open.

We intend to define it.

- Israel Duanis, CEO & Co-Founder, Linx Security

Company News

Linx Security Raises $50M Series B as Identity Becomes Security’s Biggest Failure Point

Mar 31, 2026

NEW YORK, March 31, 2026 - Linx Security, a pioneer in modern identity security and governance solutions, today announced a $50 million Series B financing round led by global software investor Insight Partners, with continued participation from Cyberstarts and Index Ventures. This brings Linx’s total funding to $83 million. The 100-person startup has already signed multimillion-dollar contracts with banks, healthcare companies, and Fortune 500 firms, governing millions of identities globally.

As enterprises adopt cloud, automation, and AI, the number of identities inside organizations has exploded, now spanning not just employees, but machines, services, and AI agents, which outnumber humans by roughly 80 to 1. Traditional identity governance tools, built for a smaller and more static environment, have struggled to keep up, leaving security teams with limited visibility, slower response times, and expanding risk at a time when nearly 90% of security incidents involve identity-related failures.

Founded in 2023 by cybersecurity veterans Israel Duanis and Niv Goldenberg, the company provides an AI-native platform that continuously maps, monitors, and governs all identities across the enterprise - human, non-human, and AI agents alike. By replacing manual processes and periodic reviews with real-time detection and automated remediation, Linx enables organizations to reduce identity risk without slowing down the business.

“Identity governance has shifted from a back-office compliance function to a core pillar of enterprise security,” said Israel Duanis, CEO and co-founder of Linx Security. “This funding allows us to scale faster and meet the growing demand from organizations that need real-time visibility and control over every kind of identity operating in their environment.”

Linx recently introduced Linx Autopilot, the first autonomous AI agent designed to fundamentally change how identity governance is managed. Moving away from the constraints of manual oversight and reactive processes, Autopilot continuously monitors identity activity, detects meaningful changes in real time, and takes action - either resolving issues automatically or escalating when needed. By operating across human, machine, and agent identities, it enables security teams to move from periodic control to continuous, intelligent enforcement, without adding operational overhead.

The new funding will support Linx’s next phase of growth, including expanding its global footprint, scaling enterprise go-to-market efforts, and accelerating product development around autonomous identity governance.

"Linx is reimagining IGA architecture to tackle the emerging problem of agent governance. The company’s AI-first approach, along with the introduction of Linx Autopilot, well positions Linx in this critical category and we're thrilled to partner on this journey,” said Teddie Wardi, Managing Director at Insight Partners.

"We backed Linx at inception because we believe identity would become the core control layer of modern security,” said Gili Raanan, Founding Partner at Cyberstarts. “AI agents are rapidly expanding the number of identities operating inside organizations, turning identity governance from a back-office compliance task into a board-level risk. Linx is building the platform to govern that new reality.”

About Linx Security

Linx Security is the AI-native identity security and governance platform built for the era of AI agents and non-human identities. Founded in 2023 and headquartered in New York, the company delivers unified visibility, continuous risk detection, and autonomous remediation across every identity in the enterprise - human, non-human, and AI. Backed by Insight Partners, Index Ventures, and Cyberstarts, Linx Security is trusted by identity-intensive enterprises globally to eliminate identity risk without slowing the business. For more information, visit www.linx.security.

About Insight Partners

Insight Partners is a global software investor partnering with high-growth technology, software, and Internet startup and Scale-up companies that are driving transformative change in their industries. As of June 30, 2025, the firm has over $90B in regulatory assets under management. Insight Partners has invested in more than 875 companies worldwide and has seen over 55 portfolio companies achieve an IPO. Headquartered in New York City, Insight has a global presence with leadership in London, Tel Aviv, and the Bay Area. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with tailored, hands-on software expertise along their growth journey, from their first investment to IPO. For more information on Insight and all its investments, visit insightpartners.com or follow us on X @insightpartners.

AI Agents

The Shift to Truly Autonomous Identity Security: Introducing Autopilot

Mar 18, 2026

TL;DR

  • Traditional identity governance relies on periodic review cycles, but point-in-time checks detect risks and misconfigurations long after they are introduced. Organizations need to take a new, modern approach to securing identity.
  • Current AI-powered identity security systems are not autonomous. They show alerts and generate recommendations but rely on a human trigger before they start taking action.
  • Truly autonomous identity security is a fundamental shift, and that’s where Linx Security’s revolutionary new Autopilot AI comes in. Autopilot evaluates access, assesses risk, and either initiates remediation or escalates to a human when oversight is required.

What Are the Limits of Reactive Identity Security?

Reactive identity security and point-in-time checks can’t keep up with the constant change that characterizes modern identity environments, especially at scale. Employees change roles, contractors rotate in and out, and machine identities created to perform a specific task are no longer needed once the task is done.

Periodic review cycles made sense in a world where identity was changing slowly and the blast radius of a compromised account was limited. But today, a single compromised identity can cascade across different cloud environments, SaaS platforms, and CI/CD pipelines in minutes. 

The 2024 Midnight Blizzard breach at Microsoft proves this point. During this attack, threat actors compromised a single test tenant account, then moved laterally to high-value assets like cybersecurity team accounts and even executives’ accounts. 

The difficult truth? Identity is now the quickest path attackers can take to reach critical systems, and reactive security isn’t enough. (Learn more about why identity breaches are preferred by attackers here.)

How Do Identity Risks Emerge Between Reviews?

Identity risk arises from the slow accumulation of misconfigurations and access changes that happen between governance reviews.

Typically, role drift and privilege accumulation are the most common sources of identity risk in any organization. Even though an access grant for a specific engineer might have been legitimate when it was approved, permissions often persist long after a role change makes them irrelevant.

Access entitlements across multiple systems exacerbate this issue, as a single user might have multiple identities and permissions across different cloud providers, SaaS applications, CI/CD platforms, and other tools. 

Risks don’t live in these systems in isolation. Think of a user who has read-only access to a production AWS account but admin access to a CI/CD pipeline that can deploy resources to that account. Human reviewers and review tools that look at systems independently won’t catch this escalation path.
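
To make this concrete, here's a minimal sketch, with hypothetical entitlement data and a hypothetical deploys_to topology, of how correlating entitlements across systems surfaces the indirect escalation path that per-system reviews miss:

```python
# Hypothetical entitlements: each system's review alone would see nothing odd.
entitlements = {
    "alice": [
        {"system": "aws-prod", "level": "read-only"},
        {"system": "ci-cd", "level": "admin"},
    ],
}

# Assumed topology: which systems can push changes into which targets.
deploys_to = {"ci-cd": ["aws-prod"]}

def effective_access(user):
    """Fold indirect paths into the user's effective per-system access."""
    effective = {}
    for ent in entitlements.get(user, []):
        effective.setdefault(ent["system"], ent["level"])
        if ent["level"] == "admin":
            # Admin on a deployment system implies write on its targets.
            for target in deploys_to.get(ent["system"], []):
                effective[target] = f"write (via {ent['system']})"
    return effective

for user in entitlements:
    direct = {e["system"]: e["level"] for e in entitlements[user]}
    for system, level in effective_access(user).items():
        if level != direct.get(system):
            print(f"escalation path: {user} has {level} on {system}, "
                  f"but only {direct.get(system)} directly")
```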

And the problem compounds when time enters the equation. When someone is granted permanent elevated access to address a particular issue instead of JIT admin access, the window between that change and the next governance review becomes especially dangerous. 

For example, a developer might get admin access to a production environment to help troubleshoot an outage. Though the incident is resolved within hours, the elevated permissions persist. 

If an attacker compromises this account, the blast radius can be significant: They’ll have access to all applications, secrets, and workloads that are running in that production environment. Identity solutions that conduct periodic reviews will eventually catch over-privileged access, but there might be months of exposure in the meantime.

Finally, department restructures happen all the time. In fact, with AI adoption, they’re more frequent than ever. These organizational changes shift the access context entirely. For instance, a team that used to need access to a particular environment may no longer exist in the same form. Despite this shift, their permissions usually stay in place until the next review cycle, resulting in over-privileged access on a team-wide scale.

What Is Reactive Tooling? What Is the Alternative?

Many enterprises believe that they’re keeping pace with risks because they’ve invested heavily in Identity Governance and Administration (IGA) platforms and Privileged Access Management (PAM) solutions. But these tools flag risks long after they’ve been introduced. 

Even the newer generation of identity security tools that have AI and machine learning (ML) capabilities still function as analysis engines. They identify issues and give you recommendations on how to solve them, but they don’t act on your behalf. 

Without automated provisioning and deprovisioning tied directly to lifecycle events, permissions drift between review cycles with no option to correct them.

The organizations that are effectively slashing identity risks are those embracing AI identity security automation in 2026: continuous, always-on coverage from autonomous AI that can detect, prioritize, and remediate access issues in real time, with minimal human oversight.

Why Should You Move From AI Assistance to Autonomous Execution?

Most of what the market today calls "AI-powered identity security" is actually AI-assisted security. As we’ve seen, these tools detect anomalies and generate recommendations. They might identify that a particular user has more privileges than most of their peers or that a service account hasn’t been used for a long period of time. These insights are useful, but AI-assisted tools leave a critical gap between identifying an issue and remediating it.

Depending on a human for input isn’t always the wrong move. Yet workflows where humans have to analyze and act on every notification from AI tools keep engineers trapped in a cycle of alerts. After all, human bandwidth will never be able to match the pace at which identity risks are growing.

To free engineers up to innovate and turbocharge remediation speed, autonomous systems handle straightforward fixes and repetitive actions. They determine when human input isn’t required by evaluating context. Then, they decide on an appropriate response and execute the corresponding workflow. 

By leveraging an autonomous security agent, the entire identity security workflow shifts from “send an alert and a recommendation to a human” to “assess the problem, decide what to do about it, and act accordingly.”
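
As an illustration of that shift (a sketch, not Autopilot's actual implementation), the loop below uses hypothetical hooks - detect_changes, risk_of, remediate, and escalate - standing in for real detection and remediation logic:

```python
import time

CONFIDENCE_THRESHOLD = 0.9  # assumed policy knob: when to act without a human

def run_loop(detect_changes, risk_of, remediate, escalate, interval_s=60):
    """Assess the problem, decide what to do about it, and act accordingly."""
    while True:
        for change in detect_changes():          # e.g. new grant, role move
            risk, confidence = risk_of(change)   # evaluate risk in context
            if risk == "none":
                continue
            if confidence >= CONFIDENCE_THRESHOLD:
                remediate(change)                # straightforward fix: act
            else:
                escalate(change, risk)           # judgment call: loop in a human
        time.sleep(interval_s)
```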

Introducing Autopilot

With Linx Security’s Autopilot, teams can now deploy AI agents that work continuously on their behalf: monitoring their identity environments 24/7, detecting meaningful changes as they happen, evaluating risk in context, and taking action in real time whenever there are issues.

What Does Autopilot Offer?

  • Speed and Control: Autopilot evaluates access, assesses risk, and either initiates remediation or escalates to a human when oversight is required, solving the speed-control paradox.
  • Governed Autonomy: Autonomy demands trust. Autopilot is designed with that in mind, featuring guardrails and intelligent oversight mechanisms that ensure each autonomous action is carefully controlled. 
  • Reduced Alert Fatigue: Unlike AI-assisted platforms, Autopilot reduces alert fatigue by looping in humans only when it’s truly necessary.
  • Task-Specific Agents: Each Autopilot agent is an expert at a core identity task, such as identifying access drift, tuning profiles, and approving JIT access requests.
  • A Comprehensive Suite of Tools: Autopilot is part of a three-tier AI architecture, alongside AI enhancements that constantly optimize and refine your data, and AI Copilot, a personal AI assistant that makes engineers Linx system superusers.

“Security teams don’t need more noise—they need meaningful leverage,” says Niv Goldenberg, Chief Product Officer and Co-Founder at Linx Security. “Autopilot allows organizations to modernize identity security responsibly, combining continuous AI-driven execution with human expertise.”

Conclusion

In a periodic review model, there’s a gap between when identity risks emerge and when governance catches up. Access changes constantly, governance occurs quarterly, and attackers operate within this window.

With autonomous identity security, this gap is closed by autonomous agents that monitor access changes in real time, evaluate them against an organization's active policies, and take immediate action to resolve any issues.

Autonomous identity security is where Linx stands apart.

“Autopilot marks the beginning of a new chapter for Linx,” says Israel Duanis, CEO and Co-Founder of Linx Security. “Our vision is to build a security platform that doesn’t just inform teams—it operates alongside them. The future of identity security isn’t more alerts or more manual reviews. It’s intelligent systems that continuously strengthen posture while keeping humans in control. This launch establishes Linx as a leader in autonomous identity security and sets the foundation for where our platform is headed.”

If you want to see Autopilot in action, join us for an in-person demonstration during the RSA Conference (March 23–26).

To see Autopilot live virtually, register for our webinar on April 9th at 11 a.m. ET: Autopilot: Closing the Identity Risk Gap with Autonomous AI. Or schedule a demo to get a personalized demonstration.

Identity Security

Anatomy of an Identity Breach: The 7 Steps Attackers Repeat (With Real Examples)

Feb 9, 2026

TL;DR

  • Attackers typically follow seven steps to carry out an identity attack, and there are ways to protect yourself at each stage of the kill chain.
  • Always check whether your credentials have appeared in data leaks (and rotate any that have), implement phishing-resistant MFA, take advantage of JIT for admin accounts, and use the principle of least privilege.
  • Preventing attacks is just one piece of the puzzle; you should also take measures that limit the blast radius, ensure you can detect issues if they pass your prevention mechanisms, and leverage automated workflows that respond to issues.

Why Do Attackers Prefer Identity-Based Attacks?

Identity is now the fastest route to critical systems: Humans, non-human identities (like service accounts, workloads, and API keys), SaaS apps, cloud control planes, and AI agents all operate through permissions and tokens that can give attackers a dangerous foothold.

Raising the stakes, identity attacks are more likely to succeed than other attacks, and they’re also harder to detect. When a threat actor uses one of your credentials, they blend in with legitimate traffic, and most security tools miss the subtle signs that point to a compromise.

While it’s impossible to build perfect prevention against all of these attacks, you can implement ironclad defenses. The key is to take a layered approach. With defense-in-depth strategies in place, when one layer is compromised, another layer will block the attack, whether it stems from phishing, credential stuffing, token harvesting, or another identity attack vector.

In this article, we’ll explore the practical steps attackers take to compromise identities and provide hands-on advice for thwarting them at each stage of the identity kill chain.

What Are the 7 Steps Attackers Use for an Identity Breach?

Attackers typically follow these steps to carry out an identity attack:

  1. Initial access
  2. MFA or “friction” bypass
  3. Privilege escalation
  4. Lateral movement via identity
  5. Persistence
  6. Taking action on objectives (data access, fraud, ransomware enablement)
  7. Evasion and reentry

Each step enables the next in the chain. As a result, a minor compromise can lead to a widespread breach through privilege escalation, lateral movement, and persistence.

Let’s take a look at each step in detail.

Step 1: Initial Access (Credentials or Foothold)

An attacker can obtain access to credentials through phishing campaigns, reused passwords, accidentally exposed secrets in VCS systems or CI/CD pipelines, or by purchasing compromised accounts on the dark web. 

Reused passwords are especially problematic. Despite security training programs, many employees continue to use the same passwords across personal and professional accounts. This practice creates a domino effect: Compromised access to one service compromises access to many others.

What’s a Real-World Example?

In 2021, attackers gained access to Colonial Pipeline’s systems by using a compromised password for a VPN account that didn’t have MFA enabled. This account actually belonged to a former employee, but it was never disabled after their termination. The threat actors used this foothold for a ransomware attack against the company, which provides fuel for about half of the East Coast. System outages cascaded into fuel shortages, and a state of emergency was declared in 17 states and Washington, D.C. Restoring operations took a $4.4 million ransom payment.

How Can Organizations Keep Systems Safe?

  • Prevent: Identify and disable all inactive accounts, as they can also pose security risks if compromised. Ensure MFA is enabled for all your users.
  • Limit the Blast Radius: Reduce the number of externally accessible services, and require additional passwords and MFA for anything important.
  • Detect: Monitor for unusual activity, like authentication attempts from unfamiliar locations or devices, or numerous failed login attempts that signal credential stuffing (see the sketch after this list).
  • Respond: Leverage automated workflows to immediately disable compromised accounts.
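
As an illustration of the Detect bullet, here's a minimal sketch that flags bursts of failed logins per source IP; the event shape and thresholds are assumptions, not a prescribed format:

```python
from collections import Counter
from datetime import timedelta

FAILED_THRESHOLD = 20          # assumed: failures per window worth alerting on
WINDOW = timedelta(minutes=5)

def detect_stuffing(events):
    """events: dicts like {"ts": datetime, "ip": str, "outcome": "fail"|"ok"}."""
    recent = []
    for event in sorted(events, key=lambda e: e["ts"]):
        if event["outcome"] != "fail":
            continue
        recent.append(event)
        # Keep only failures inside the sliding window.
        cutoff = event["ts"] - WINDOW
        recent = [e for e in recent if e["ts"] >= cutoff]
        counts = Counter(e["ip"] for e in recent)
        if counts[event["ip"]] >= FAILED_THRESHOLD:
            yield {"ip": event["ip"], "count": counts[event["ip"]],
                   "ts": event["ts"]}   # hand off to the Respond step
```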

Step 2: MFA or “Friction” Bypass

MFA is just the first line of defense, and it’s not a silver bullet. When attackers encounter MFA, they can employ tactics to get around it. For example, fatigue attacks involve sending a flood of MFA approval requests to your users until they accept.

Social engineering isn’t the only risk, though. Phone-based MFA is vulnerable to SIM swap attacks, which could allow attackers to intercept your SMS codes.

What’s a Real-World Example?

In 2022, Uber experienced a data breach that began when a hacker purchased an employee’s credentials on the dark web. After encountering MFA, the attacker impersonated a security employee, initiated a fatigue attack, and asked the compromised user to accept the MFA requests he sent. Once the fatigue attack proved successful, the attacker gained access to Uber’s VPN; from there, he moved laterally, ultimately gaining full admin privileges.

How Can Organizations Keep Systems Safe?

  • Prevent: Use strong MFA mechanisms (authenticator apps, hardware keys, or passkeys) for all accounts if possible, or at least for privileged ones. Implement phishing-resistant MFA, and establish strict proof-of-identity requirements for help desk employees.
  • Limit the Blast Radius: Require multiple approvals for high-privilege account resets; require additional passwords for sensitive services.
  • Detect: Implement MFA monitoring that automatically denies a flood of requests, and require human approval (with identity verification) before users can add a new authentication device.
  • Respond: Whenever you detect suspicious MFA activity, temporarily restrict access for your user until verification is complete.

Step 3: Privilege Escalation

Accounts with permanent administrative rights are exactly what malicious actors are looking for. Instead of standing privileges, a better move is to grant temporary admin privileges through a mechanism like just-in-time access. 

Another problem to look out for? When secrets hygiene is not implemented consistently, and secrets like API keys are stored in VCS systems or wikis, there are simple opportunities for privilege escalation.

What’s a Real-World Example?

In October 2023, Okta experienced a breach after an attacker compromised a customer support engineer’s account. This account had administrative rights, allowing the attacker to view HTTP Archive (HAR) files containing cookies and session tokens uploaded by customers during support troubleshooting sessions. By stealing session tokens, the attacker was able to impersonate legitimate users across different organizations.

How Can Organizations Keep Systems Safe?

  • Prevent: Implement just-in-time (JIT) access for administrative accounts (see the sketch after this list).
  • Limit the Blast Radius: Ensure admin accounts are specific to a single service and don’t have cross-service privileges.
  • Detect: Implement alerts for role changes or permission modifications.
  • Respond: Build in automation that responds to a suspicious account by revoking elevated access and reviewing recent actions.
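
On AWS, for instance, the JIT idea in the Prevent bullet can be implemented with short-lived STS sessions instead of standing admin users. A sketch, with a placeholder role ARN; the approval workflow wrapping this call is up to you:

```python
import boto3

def jit_admin_session(role_arn, requester, minutes=60):
    """Hand out a temporary admin session that expires on its own."""
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=role_arn,                     # e.g. your break-glass admin role
        RoleSessionName=f"jit-{requester}",   # shows up in CloudTrail
        DurationSeconds=minutes * 60,
    )
    creds = resp["Credentials"]
    # Temporary credentials: no deprovisioning step to forget later.
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```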

Step 4: Lateral Movement via Identity (SSO, SaaS, Cloud Control Plane)

It goes without saying: When attackers gain elevated privileges, what they’re really gaining is the ability to move laterally through your connected systems. For example, a compromised SSO account can unlock access to dozens of applications, and cloud control planes can be accessed from anywhere with valid tokens.

What’s a Real-World Example?

In 2023, an attacker known as Storm-0558 leveraged forged Microsoft authentication tokens to access enterprise email accounts. The mechanism of attack? Lateral movement from MSA (consumer) keys to the Azure AD enterprise system. The breach affected approximately 25 organizations, primarily government agencies, including U.S. State Department email accounts.

How Can Organizations Keep Systems Safe?

  • Prevent: Avoid creating “super admin” accounts that can access all your systems.
  • Limit the Blast Radius: Remove unnecessary permissions that might offer access to systems your users don’t actually need access to.
  • Detect: Implement monitoring for unusual access patterns, especially accounts accessing systems they’ve never accessed before.
  • Respond: When you detect lateral movement, isolate the compromised identity and review access logs.

Step 5: Persistence (Tokens, OAuth Apps, Service Principals, Backdoor Identities)

As soon as an attacker gains access to your systems, they’ll look for ways to maintain it in case the original entry point is detected and blocked. Persistence techniques include the creation of OAuth applications, service principals, and API keys. These mechanisms are highly effective because they are often mistaken for legitimate administrative objects and can even survive password resets.

What’s a Real-World Example?

In 2025, Salesforce warned customers that a group called ShinyHunters was using vishing (voice phishing) to trick help desk staff into resetting MFA on privileged accounts. Once they got a foothold in a Salesforce instance, the attackers created malicious OAuth applications that allowed them to maintain persistent access.

How Can Organizations Keep Systems Safe?

  • Prevent: Control who can create OAuth applications, and establish lifecycle governance for service principals to ensure they have expiration dates.
  • Limit the Blast Radius: Restrict the permissions that can be granted to OAuth applications (for example, in AWS, use permission boundaries or service control policies to limit what IAM roles your OAuth apps can assume); ensure your service principals respect the principle of least privilege (PoLP).
  • Detect: Alert on the creation of new applications that require extensive permissions.
  • Respond: Maintain an inventory of authorized OAuth apps and service principals, and remove any new apps that are created outside of your process.
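
The Respond bullet's inventory reconciliation can be sketched like this, with fetch_oauth_apps as a hypothetical connector to your IdP or SaaS tenant:

```python
APPROVED_APPS = {"corp-ci", "backup-sync", "hr-export"}  # your vetted inventory

def find_unauthorized(fetch_oauth_apps):
    """Yield OAuth apps present in the tenant but missing from the allowlist."""
    for app in fetch_oauth_apps():   # e.g. [{"name": ..., "scopes": [...]}]
        if app["name"] not in APPROVED_APPS:
            yield app                # candidates for automated removal

# Example with stub data standing in for a real connector:
apps = [{"name": "corp-ci", "scopes": ["repo:read"]},
        {"name": "mailer-x", "scopes": ["mail:read", "mail:send"]}]
for stray in find_unauthorized(lambda: apps):
    print("unapproved OAuth app:", stray["name"], stray["scopes"])
```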

Step 6: Action on Objectives (Data Access, Fraud, Ransomware Enablement)

Identity compromise is rarely the final objective for an attacker. Usually, it’s only a stepping stone on the way to accessing data, committing fraud, or enabling ransomware.

What’s a Real-World Example?

In September 2023, MGM Resorts experienced a devastating ransomware attack that led to more than a week of operational problems across 30 resorts: shut-down slot machines, offline ATMs, and locked-out guests (the downside of digital hotel keys). Attackers gained access by researching employees on LinkedIn, then calling the help desk to request a password reset in their names.

How Can Organizations Keep Systems Safe?

  • Prevent: Implement PoLP on both the infrastructure and data layer; require additional verifications before a user can perform sensitive actions (e.g., ask users to reauthenticate with MFA or ask them for a manager’s approval).
  • Limit the Blast Radius: Prevent the creation of “super admins.” If any exist in your systems, downgrade their privileges. 
  • Detect: Alert on mass downloads or unusual queries against sensitive databases.
  • Respond: Implement automation that can quickly restrict access when suspicious data access is detected.

Step 7: Cover, Repeat, Expand (Defense Evasion + Re-Entry)

Sophisticated attackers try to reduce their visibility as much as possible by altering audit logs and disabling security tools. They also create multiple re-entry points. Many times, this goes unnoticed: In the wake of a breach, organizations can get tunnel vision and focus only on the initial entry point.

What’s a Real-World Example?

In 2023, a threat group called LockBit demonstrated impressive defense-evasion techniques, accounting for $91 million in ransomware payments in the U.S. alone. The secret to their success? They played the long game. When they gained access to their victims’ systems, they didn’t deploy ransomware right away; they first covered their tracks and expanded their foothold. Malware deployment and ransom demands came weeks or months later.

How Can Organizations Keep Systems Safe?

  • Prevent: Implement audit logging, and forward logs to immutable storage (see the sketch after this list).
  • Limit the Blast Radius: Ensure that no one can disable security monitoring, not even for testing purposes.
  • Detect: Alert on log-retention policy changes and treat them as high-priority security incidents.
  • Respond: Implement automation that can quickly revoke access for a compromised identity across all systems.
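
On AWS, one concrete way to implement the Prevent bullet is S3 Object Lock in compliance mode, which stops anyone, admins included, from altering or deleting logs during the retention window. Bucket name and retention period are placeholders:

```python
import boto3

s3 = boto3.client("s3")
# The bucket must have been created with Object Lock enabled.
s3.put_object_lock_configuration(
    Bucket="audit-logs-immutable",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```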

What Are Best Practices for Reducing Identity Breaches?

Follow this checklist to cut your identity risk:

  • Start by Gaining Visibility: You can’t protect what you don’t see, so inventory your identity sprawl and identify password-only external access.
  • Review Admin Privileges: Determine who has admin rights, and analyze if they actually need all those permissions.
  • Test How Fast You Respond to Issues: Identify how much time it takes to revoke all access for a specific identity. Use this test result as a baseline for improvement.
  • Deploy Phishing-Resistant MFA: Phishing-resistant MFA needs to be implemented everywhere, as attackers often compromise lower-priority systems first and then move laterally.
  • Eliminate Exposed Credentials and Leaked Secrets: Scan your code repositories, wikis, and shared documents for exposed credentials. Implement automated scanning in your CI/CD pipelines to prevent secret leaks.
  • Protect Audit Logs: Audit logs should be stored in immutable storage to ensure they cannot be altered after creation.
  • Create Alerts: Alert on role changes, app consents, unusual MFA behavior, and federation changes.
  • Implement JIT Elevation: You don’t need persistent admin permissions. Administrative access should be granted on demand for a specific time period.

Conclusion

Identity breaches are the easiest way in for attackers, and they usually follow a predictable pattern.

To disrupt this pattern, shifting left with stronger prevention is a start, but it’s not enough. You’ll also need to build powerful detection capabilities and automate quick responses to threats. Your motto should be, “Make it harder to get in, harder to escalate, harder to persist, easier to detect, and faster to contain.”

At Linx Security, we help organizations build robust identity security that addresses each stage of the attack chain. Book a demo with one of our engineers to learn more about how we can keep your systems safe from identity breaches.

AI Agents

AI-Native Databases: The Missing Layer Behind Reliable CoPilots

Jan 30, 2026

For the past two years, I've been building agents that expose data residing in different databases. Here, I'd like to share some actionable insights I've gathered along the way.

At Linx, we had to handle extremely high-scale databases for large enterprises. Building agents that perform well with low latency and high accuracy is hard, and the list of challenges is long.

Which model should you use? Should you fine-tune? How do you consume historical query data, and should you perform active learning? What about orchestration - do you go with an agentic framework or keep it vanilla? How do you respond quickly to investigations running against high-scale databases? And how do you rationalize cross-domain information spanning Business, Security, Governance, and Compliance?

These are just a few of the questions we had to answer.

While all of these topics are important, I'd like to focus on a different angle, one that turned out to be even more crucial.

When building an agent, engineers tend to equip it with tools that can query the data, expose the schema to it, and assume that the agent will perform well from there. However, that's not the case.

Imagine you're exposing a schema to a junior analyst who's proficient in your database query language. Will they be able to answer questions about the data correctly? In reality, no. In the following sections, I'll explain why not, and how we solved it.

What Differentiates AI from Humans?

Jeremiah Lowin, in his excellent talk, presents criteria for how LLMs differ from humans when consuming data from APIs. Here's my version for the database problem, which is slightly different - the real-life examples below show where those differences bite.

Real-Life Examples of Non-AI-Friendly Cases

Bad Naming: We had a field called is_external, which actually means “is the email domain external to the organization.” It does not mean the user is external, it's a property of the email itself. That naming alone caused repeated mistakes when the AI was asked about external users. The AI assumed the user was a guest, leading to incorrect security audit reports.

Different Lingo: We use a graph database. The relationship between a user and their accounts was represented by an edge named owner_of. But the relationship between a user and their secrets was named responsible_for. When someone asks "Who owns this secret?", the agent tries to generate a query using owner_of, even though we explicitly mentioned what types of edges exist and how they operate. As a result, the query returned no results, even though the data existed.
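
Short of renaming the edges themselves, one mitigation is to resolve the user's vocabulary to the schema's actual edge names before the agent generates a query. A sketch, using the edge names from the example above; the resolver itself is illustrative:

```python
# Business vocabulary -> actual edge name, keyed by target entity type.
EDGE_SYNONYMS = {
    "owns": {"account": "owner_of", "secret": "responsible_for"},
}

def resolve_edge(verb, target_type):
    """Map a user's phrasing ("who owns this secret?") to a real edge name."""
    try:
        return EDGE_SYNONYMS[verb][target_type]
    except KeyError:
        raise ValueError(f"no edge mapping for {verb!r} on {target_type!r}")

assert resolve_edge("owns", "secret") == "responsible_for"  # not owner_of
```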

Design for Performance: We have an accounts collection, where each account represents a human in a specific application. We chose to keep the application name and data in another collection, storing only the app ID in the account document so that one could join them to get the app name if required. This was done to support the use case of app renaming without migrating many documents. In reality, since 98% of queries from accounts required the app name, this caused a huge waste of tokens as the same join query was generated over and over again. (We also found it to be non-performant for the non-agent use case.)

Fields That Shouldn't Be Exposed: We had many internal and legacy fields for feature flags, processing states, migration leftovers, and version counters. Things like read_for_processing and migrated—humans learn to ignore them. In some cases, the agent treats them as meaningful and starts weaving them into answers; in others, we're just wasting tokens.

Why Database Schemas Are Built This Way

Your database dialect is set by how your company talks and names things, but customer language doesn't always match your schema's language. The moment users ask questions their way, the gap shows up immediately. While engineering teams optimize for performance, storage, and clean abstractions—all valid priorities—AI agents need something entirely different: clarity and self-explanatory semantics. This disconnect persisted because, until recently, these systems were not customer-facing, and engineers who had to query the data saw no reason to complain.

Why Building a View Is Not Enough

In the MCP presentation, the suggestion is to curate a new API to be consumed by an agent alongside the existing API. One might say, "Let's build a view on top of the existing database and enforce these concepts there." However, I don't think this approach works here, for several reasons:

Views drift. New fields get added to the real model, someone forgets to update the view, and suddenly the CoPilot can't answer questions about new features. For customers, if the product and the AI's "understanding" (the view) diverge by even 5%, the Agent becomes unreliable.

Views might require duplicating the data, which is costly (depending on the type of view).

The concepts here aren't only good for an agent, but for anyone trying to access the database. The same mistakes made by an agent are also made by engineers who build queries and assume they correctly understand the schema.

This doesn't mean we can't create new fields to be consumed solely by the AI, or that there won't be fields the AI should ignore. But the majority of data should be streamlined with a process designed to ensure it is AI-Native. There's nothing more frustrating than finding out a day after a new feature was released that it's not exposed by the CoPilot.

Why Can't We Use a DAL (Data Abstraction Layer)?

A Data Abstraction Layer (DAL) is a software layer that sits between the application and the database, providing a simplified interface that hides the complexity of data storage and retrieval.

A DAL addresses many of the issues raised above. It focuses on outcomes, inner joins are already set, fields that should be ignored are removed, and it's usually explainable and optimized for performance.

However, using a query language is almost like writing code. You can do much more, and DALs are always limited by how they were designed and built. With open-ended queries, the possibilities are as broad as the database creator allows—which is usually what customers expect when asking a CoPilot about their data.

DALs are rigid; AI needs the flexibility of SQL but the safety of a DAL.

How We Solved These Challenges at Linx

At Linx, we weren't just dealing with flat tables; we were managing a massive Identity Graph. This added a layer of complexity where "truth" isn't found in a single row, but in the relationships between disparate domains—merging Business, Security, Governance, and Compliance data into a single coherent view.

We decided to build multiple tools to help our CoPilot answer customer questions. On the database side, we expose everything by default—new fields require a description and should be immediately discoverable by the agent. Alongside this, we also expose built-in APIs to save time for simple cases.

We have different mechanisms for reducing end-to-end latency around RAG and active learning, but I won't go into them in this post, as they target a different angle of how to improve CoPilot performance and reliability and would require another blog post.

Engineers can decide to hide fields from the CoPilot explicitly.

The AI-Native Data Lifecycle: From Code to Production

*Yes, I used Gemini to make this ridiculous diagram; it seemed only fitting given the topic. And it gets the job done!

1. Intentional Design: The journey begins with a cultural shift in how our engineers view data. During the initial design phase, engineers must explicitly classify every new field as either exposable or restricted. This ensures that data exposure is never a side effect, but always a conscious decision.

2. Static Enforcement: We utilize static analyzers to enforce our documentation standards: if a field is marked for exposure but lacks a clear description, the build is blocked. This rigid enforcement prevents "schema drift," ensuring that no new data points are silently added or forgotten without a clear contract.
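
A minimal sketch of what that gate can look like, assuming fields are declared with exposability and description metadata; run it in CI, and a nonzero exit blocks the build:

```python
import sys

# Assumed field metadata, however your schema is actually declared.
SCHEMA = [
    {"name": "email_domain_is_external", "exposable": True,
     "description": "True when the email domain is outside the org."},
    {"name": "read_for_processing", "exposable": False, "description": ""},
    {"name": "last_login", "exposable": True, "description": ""},  # fails
]

def check(schema):
    """Return one error per exposable field that lacks a description."""
    return [f"exposable field {f['name']!r} has no description"
            for f in schema
            if f["exposable"] and not f["description"].strip()]

if __name__ == "__main__":
    problems = check(SCHEMA)
    for p in problems:
        print("schema check:", p, file=sys.stderr)
    sys.exit(1 if problems else 0)
```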

3. Agentic Semantic Validation: We have developed a custom internal agent specifically designed to validate our data integrity. Rather than relying on basic syntax checks, this agent performs deep semantic analysis:

  • Consistency Checks: Validates that field names align perfectly with their descriptions.
  • Logic Verification: Analyzes calculated fields to ensure the underlying logic matches what the name implies to an LLM.
  • Confusion Matrix: Proactively flags near-duplicate fields or ambiguous naming conventions that could cause "hallucinations" or mix-ups during inference.

4. Production Monitoring: Finally, we maintain standard production monitoring to identify and resolve any edge-case issues or anomalies in real time.

Many Databases, Many Truths

Okay, so far so good, right? I wish it were that easy.

As I continued building, I found that this gets harder as systems mature. Real products don't query a single database. You have an operational DB, an analytics store, a warehouse, a lakehouse, documentation, and now APIs via MCP. The same concept ends up in multiple places with slightly different names or shapes. The model has to guess whether account, tenant, and org are the same thing or three different ones. We check for that too: the same entity exposed under different names across different sources, creating ambiguity.

Principles for AI-Native Infrastructure

To sum it up, when building CoPilots that run Text2SQL tasks, we should follow principles that make the CoPilot more reliable (alongside the well-known database metrics we follow, such as performance). Just as we follow SOLID principles when writing code, below is a suggested modified SOLID (or SDDID) for AI-Native infrastructure:

Semantic Naming: Table and field names must be self-explanatory. If is_external refers to an email domain and not a user's status, it must be renamed or aliased for the AI.

Dialect Alignment: The schema should match the mental model of the user. If your customers ask about "Ownership," don't hide that relationship behind technical jargon like responsible_for. Your database dialect must speak the same language as your business.

Documentation: Every exposable field must have a description attribute. This metadata shouldn't live in a separate Wiki; it should be part of the database contract.

Intentional Exposure: Not all data is for AI. Use "AI-Exposability" flags to hide internal flags, version counters, and migration leftovers that confuse the model and waste tokens.

Drift Detection: Implement automated "Semantic Tests" in your CI/CD. If a new field is added without a description or violates naming conventions, the build fails. AI-readiness is a first-class citizen.
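
Pulling these principles together, the "database contract" might be declared in code along these lines - an illustrative shape, not a prescribed format - so the drift-detection check has something concrete to test against:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldContract:
    name: str           # semantic, business-dialect name
    description: str    # required whenever ai_exposable is True
    ai_exposable: bool  # intentional exposure, never a side effect

ACCOUNT_FIELDS = [
    FieldContract("app_name", "Display name of the application.", True),
    FieldContract("email_domain_is_external",
                  "True when the email domain is outside the organization.",
                  True),
    FieldContract("migration_version", "Internal bookkeeping only.", False),
]
```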

Identity Governance

n8n Credential Sharing: Ownership vs Control

Jan 15, 2026

Explicit Sharing, Implicit Availability, and Administrative Impersonation

Automation platforms rely on credentials to execute actions under a user’s identity. In n8n, credentials are created by individual users and are commonly perceived as personal, especially when they live in a user’s personal project and aren’t explicitly shared.

In a multi-user n8n instance, though, “who created the credential” and “who can use the credential” are not always the same thing. Credential availability is governed by several mechanisms. Some are explicit and intentional. Others are implicit and easier to miss. The difference matters because it directly impacts identity use, accountability, and auditability.

The observations described here are based on direct experimentation in the n8n Cloud environment.

Credential sharing models in n8n

n8n provides multiple ways in which credentials can be used by users other than their creator. These mechanisms differ in consent, visibility, and scope.

Explicit credential sharing

A credential owner can explicitly share credentials:

  • With specific users
  • With specific projects

Once shared, other users can execute workflows using those credentials.

This model is straightforward: the owner makes an intentional sharing decision, and other users gain access within the boundaries of what was shared.

Implicit credential availability

In addition to explicit sharing, credentials may be usable by others through implicit relationships.

Implicit case 1: Project-wide credential usage

If user A (member or admin) creates credentials in a project, other members of that project can execute workflows using those credentials, acting on behalf of user A.

This applies even if the credentials were not explicitly shared with each individual member.

In this case, credential usage remains scoped to the project, but the key detail is that “created in the project” can function like “available to the project,” even when individual user-to-user sharing never happened.

Implicit case 2: Instance admin usage across personal projects

A user can create credentials in their personal project and avoid sharing them with any other user or project.

Despite this, an instance admin can:

  • Create a workflow in the admin’s personal project
  • Select the user’s personal, unshared credentials
  • Execute workflows using those credentials

This does not require:

  • Credential sharing
  • Shared project membership
  • Notification to the credential owner
  • Additional consent

In other words, credentials created in personal projects are still usable at the instance level by administrators. Practically, “personal” describes where the credential is stored, not who ultimately controls its use.

OAuth example: Slack credential reuse

OAuth-based integrations make this behavior especially observable because consent is usually visible to the user.

Here’s the sequence:

  1. A user authorizes n8n to access Slack via OAuth
    • Tokens are issued under the user’s identity
    • Credentials are stored in the user’s personal project
  2. The user executes workflows using those credentials
  3. An instance admin creates a workflow in the admin’s personal project
    • The admin selects the user’s Slack credentials
    • No OAuth re-consent occurs
    • The user is not notified
  4. The workflow executes
    • Actions occur under the user’s identity

In our experiment, Jack, a member of the organization, creates Slack credentials scoped to his personal project. He does not share his credentials with anyone, and he approves the requested OAuth scopes himself.

Later on, the instance admin’s own credential set does not include any credentials for Slack.

When the admin creates a new workflow in the admin’s personal project and adds a Slack node, the list of available credentials includes credentials created by other users. Among them are credentials created by Jack, a member of the organization, which were created in Jack’s personal project and not explicitly shared.

The admin can select Jack’s credentials and execute the workflow.

When the workflow is run by the admin, using Jack’s credentials, the resulting data corresponds to Jack’s Slack context. In an experiment we conducted, this allowed the admin to access sensitive information from Jack’s Slack channels, including messages containing salary-related information.

From Slack’s perspective, there is no distinction between actions initiated by the user and actions initiated by an admin using the user’s credentials.

What we validated in practice

In our test, an instance administrator was able to create a separate workflow and run it using another user’s Slack OAuth credentials that were stored in that user’s personal project and not explicitly shared. The workflow executed successfully without re-consent and without alerting the credential owner.

This is the core point: an admin can operationally “become” the user inside downstream systems, because downstream systems only see “valid token for user X,” not “who clicked run in n8n.”

n8n documentation and credential control

n8n states in its public GitHub repository:

“Instance owners and instance admins can view and share all credentials on an instance.”

That statement accurately describes administrative visibility and sharing capabilities.

What it does not explicitly state is that instance admins can execute workflows using credentials created by other users, including credentials stored in personal projects and not shared.

“View and share” and “run as the user” are materially different operations. The former suggests administrative oversight. The latter enables administrative impersonation in downstream systems.

Implications

This model introduces several effects that are easy to underestimate:

  • Actions are attributed to the credential owner. Downstream logs reflect the user whose token is used.
  • Credential owners cannot distinguish their actions from those performed by admins using their credentials.
  • OAuth consent granted by a user applies beyond that user’s control. The token’s effective authority can be exercised by others.
  • Credential boundaries align with instance authority rather than user intent. “I didn’t share it” is not the same as “nobody else can use it.”

These effects result from the credential availability model, not from a vulnerability or misconfiguration.

Trust model

n8n’s design assumes that instance administrators are trusted to operate using any credentials present in the instance.

In some environments, that assumption is acceptable. In others, especially those with shared administration, separation of duties, strict audit requirements, or regulated workflows, it may not match organizational expectations.

If your internal model is “admins manage the platform but cannot act as end users,” n8n’s behavior conflicts with that model.

Why this matters for agentic governance

Agentic governance has become an increasingly important issue. Automation tools now serve as execution layers: they connect systems, move data, and take actions with delegated authority. Whether you call them “automations,” “workflows,” or “agents,” the governance question is the same:

Who can cause actions to occur under a given identity, and can you prove who did what?

In this model, credential ownership (who created the credential) can diverge from credential control (who can actually use it). That gap is where audit and accountability get blurry.

Takeaway

In n8n, credentials can be made available through:

  • Explicit sharing by the credential owner
  • Implicit availability within a project
  • Implicit availability to instance administrators across personal projects

While n8n documents that admins can view and share credentials, it does not explicitly document that admins can execute workflows using unshared personal credentials and act under another user’s identity.

Organizations operating n8n should treat credential storage and OAuth authorization as instance-level trust decisions, and communicate that clearly to users.

This post describes behavior, not intent. It clarifies where credential control actually resides.

Agentic governance with Linx

Most IAM teams already have a set of routines that work: access reviews, ownership assignment, least privilege, and remediation. Linx incorporates agentic identities into the same platform you use to secure and govern human and non-human identities across SaaS, cloud, and on-prem applications. That keeps visibility and governance consistent, even as identity types expand. 

Read more about it here: Agentic identity discovery and governance with Linx

AI Agents

Everything You Need to Know About Agentic Identity

Jan 8, 2026

Executive TL;DR

  • With AI agents on the rise, every IAM and security stakeholder must understand that agentic identities aren’t just non-human identities (NHIs) with better automation. They’re ambitious actors that never stop working.
  • The core shift: risk shows up in the sequence of actions over time, not in any single login, token, or entitlement change.
  • To govern agents, IAM needs visibility into decision loops and delegation chains, not just credentials.

Old identity assumptions no longer hold

Most IAM programs were built around a clean divide:

  • Humans authenticate, receive entitlements, and operate within organizational norms and limits.
  • Non-human identities execute narrowly defined workloads with permissions scoped in advance.

Agentic identities fit neither category because they:

  • Are persistent decision-makers
  • Plan, adapt, and act across multiple steps
  • Decide what to do next, which systems to touch, and which tools to use based on goals and context
  • Act without waiting for explicit instructions at each turn

That changes what “identity” looks like in practice. Identity does not show up as a single event. It unfolds as behavior, delegation, and a chain of access decisions that change while the agent operates.

Why this breaks traditional IAM

Traditional IAM secures moments in time, such as logins, role assignments, and token issuance. That works when access attempts are made by humans at human speed, or are machine-driven and tightly scoped.

Agents operate between those moments. They decompose goals into steps and discover and request access as they go. One permission unlocks the next. Each request can look reasonable alone, but risk lives in the sequence.

And the security problem often begins after access is granted. With long-running agents, credentials rotate, but intent persists.

What agentic identities are, and what they’re not

Agentic AI systems may comprise a single agent or multiple agents operating in coordination to achieve an outcome. These agents determine paths, tools, and systems to use, and then act on those decisions without continuous human oversight.

That autonomy changes how identity behaves. Unlike traditional NHIs, agentic identities change as agents reason, adapt, and interact with different parts of the enterprise environment.

Agentic identities are

  • Goal-driven decision loops that keep operating until an objective is met
  • Cross-system actors that express identity through actions, not just a single credential
  • Delegation-heavy workflows that often operate through chains of service accounts, roles, tokens, and tools

Agentic identities are not

  • Just “smarter bots” running a fixed script
  • A single service account or workload identity you can understand in isolation
  • Automatically low-risk because credentials are short-lived

How agents consume access in practice

Agents don’t approach a goal by requesting every permission upfront. They break the goal into steps. Each step reveals which tools, permissions, or data they need next, so access is discovered rather than predefined.

Traditional models grant access first and assume that usage comes later. Agents often reverse that flow. They attempt an action, learn what’s missing, and acquire the next layer of access.

Micro-example (how compounding happens):

  • Agent starts with read-only analytics access
  • It hits a limit and requests config-read access to diagnose the issue
  • It determines a change is needed and requests write access in one system
  • It delegates the fix to a deployment identity that has broader privileges

Each step can look reasonable alone. The chain is the risk.
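A rough sketch of why per-request review misses this (names and risk scores are hypothetical): each grant scores low on its own, while the accumulated chain is what carries the signal.

```python
# Illustrative sketch (names and risk scores hypothetical): a per-request
# reviewer sees each grant alone; a chain-aware control scores the sequence.
RISK = {"read": 1, "write": 3, "assume": 5}

chain = [
    ("analytics", "read"),     # starting access
    ("config", "read"),        # requested to diagnose the issue
    ("config", "write"),       # requested to apply the fix
    ("deploy-svc", "assume"),  # delegated to a broader deployment identity
]

per_request = [RISK[scope] for _, scope in chain]
print("each request alone:", per_request)   # [1, 1, 3, 5] -- each looks modest
print("whole chain:", sum(per_request))     # 10 -- the compounding is the signal
```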

Tool connections can create implicit privilege

Many agents do not authenticate to tools using OAuth tied to the current user. Instead, they use long-lived secrets, API keys, tokens, or service credentials configured in the workflow or agent environment.

This can create a mismatch between who is allowed to run the agent and what the agent can do. A user may have limited direct access to an application, but by running an agent that already has a permanent credential configured, they can trigger actions they should not be able to perform.

From the outside, it still looks like normal tool usage. The risk is that the permissions are effectively inherited through configuration, not granted through an approval process tied to the user and the moment.
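A hedged sketch of that mismatch, with a hypothetical workflow-config shape: the check compares what a runner could obtain directly with what the configured credential grants them implicitly.

```python
# Hedged sketch (workflow-config shape is hypothetical): compare what a runner
# can access directly with what the workflow's stored credential grants them.
workflow = {
    "name": "crm-sync",
    "runners": {"analyst", "intern-bot"},      # who may trigger the agent
    "tool_credential": {
        "type": "api_key",                     # long-lived secret, not per-user OAuth
        "grants": {"crm:export", "crm:delete"},
    },
}

direct_access = {"analyst": {"crm:read"}}      # what users hold in their own right

for user in sorted(workflow["runners"]):
    inherited = workflow["tool_credential"]["grants"] - direct_access.get(user, set())
    if inherited:
        print(f"{user} inherits via configuration: {sorted(inherited)}")
```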

Why short-lived credentials don’t eliminate identity risk

Short-lived credentials reduce exposure, but they don’t remove accountability gaps.

Agents rely on temporary tokens, delegated credentials, and role sessions. Tokens expire and sessions rotate, but an agent’s intent persists. As one credential ends, another replaces it.

IAM records separate events tied to different identities, but in reality, it is one actor continuing the same reasoning. The question becomes how access combines across identities to reach an outcome. Without that connection, accountability breaks and intent is invisible.

Delegation chains are where context disappears

Delegation is how agents operate at scale, and it’s where visibility erodes fastest.

Agents work through chains of delegation: impersonating service accounts, exchanging API tokens, and federating into workload identities. Each handoff can be valid. Each hop can behave as designed.

The problem is context loss. Most IAM systems record these as separate events. A role assumption here. A token exchange there. What they miss is the sequence that connects them. Without that thread, identity becomes fragmented execution instead of a single decision-maker spanning systems.

Security teams then reconstruct behavior after the fact, inferring intent from logs never meant to explain it.
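One way to keep the thread, sketched under the assumption that your platform can emit a shared run or trace id on each hop (the log fields here are hypothetical):

```python
# Minimal sketch (log fields hypothetical): stitch per-identity events back
# into one decision flow using a shared run id, if your platform emits one.
from collections import defaultdict

events = [
    {"run_id": "r-42", "identity": "agent-sa",   "action": "assume_role:diag"},
    {"run_id": "r-42", "identity": "diag-role",  "action": "read:config"},
    {"run_id": "r-42", "identity": "deploy-sa",  "action": "write:prod-config"},
    {"run_id": "r-99", "identity": "report-bot", "action": "read:metrics"},
]

chains: dict[str, list[str]] = defaultdict(list)
for event in events:
    chains[event["run_id"]].append(f'{event["identity"]}:{event["action"]}')

for run_id, hops in chains.items():
    # One actor, several identities, a single recoverable thread.
    print(run_id, " -> ".join(hops))
```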

What this looks like in a real enterprise

Agent permissions in the enterprise can look ordinary until viewed collectively.

Scenario 1: “Fix the performance issue”

An agent detects a performance issue with limited analytics access, assumes a diagnostic role, identifies a configuration problem, and fixes it using a separate deployment identity.
IAM logs each step under a different identity. None appears risky alone. The real identity is the agent coordinating the entire sequence.

Scenario 2: “Prepare a report”

An agent is asked to prepare a quarterly report. It pulls metrics from a BI tool, then discovers it needs customer fields from a CRM. A developer previously connected the CRM using a long-lived API secret so the agent can “just work.” The agent exports the dataset to a collaboration tool to share with stakeholders.
Each system sees a legitimate action. The risk is the combined outcome: sensitive data moved across systems using credentials that are not tied to the user running the agent, through a chain no single control point evaluated end-to-end.

This is why agentic identities are easy to underestimate. Their behavior only makes sense in the context of a continuous decision loop.

Start here this week

If you do nothing else, start by making agentic identity visible and governable in the places where risk concentrates.

Start here checklist:

  • Inventory where agents already operate (workflows, platforms, SaaS copilots, internal tools)
  • Identify which identities they execute through (service accounts, roles, tokens, workload identities)
  • Inventory agent tool connections and how they authenticate (OAuth vs long-lived secrets)
  • Identify run-context mismatches (who can run an agent vs what its configured credentials allow it to do)
  • Flag high-impact actions (IAM changes, data exports, production deploys, privilege grants)
  • Define guardrails for those actions (allowed tools, allowed targets, approval requirements)
  • Require traceability through delegation chains (correlate hops back to the originating agent or workflow)

This is foundational. You cannot govern what you cannot map.
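As a starting point for the first two checklist items, a first-pass inventory can be as simple as this sketch (fields and values hypothetical): flag agents with no clear owner or with tool connections backed by long-lived secrets.

```python
# First-pass inventory sketch (fields and values hypothetical): flag agents
# with no clear owner or with tool connections backed by long-lived secrets.
agents = [
    {"name": "report-bot", "owner": "data-team", "auth": ["oauth"]},
    {"name": "ops-agent",  "owner": None,        "auth": ["api_key", "oauth"]},
    {"name": "crm-sync",   "owner": "sales-eng", "auth": ["api_key"]},
]

for agent in agents:
    flags = []
    if not agent["owner"]:
        flags.append("no owner")
    if "api_key" in agent["auth"]:
        flags.append("long-lived secret")
    if flags:
        print(f'{agent["name"]}: {", ".join(flags)}')
```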

How risk scales in agentic environments

Agentic identities accumulate risk faster because access is discovered, chained, and delegated across systems and identities.

Compounding access creates sideways expansion

When agents optimize for task completion, privilege expansion becomes autonomous unless guardrails restrict their decision scope.

This can begin with narrow permissions. As the task develops, each permission reveals the need for additional access. The expansion is not always a single obvious escalation; it can be accumulation across tools and identities.

Least privilege cannot be treated as a one-time entitlement decision. For agents, it has to be enforced continuously for high-impact actions, and evaluated in the context of the decision loop.
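A minimal sketch of what continuous enforcement could look like (policy shape and action names hypothetical): high-impact actions are evaluated at decision time, in the context of what the agent has already done in the current run.

```python
# Hedged sketch (policy shape and action names hypothetical): high-impact
# actions are checked at decision time, against what the run has already done.
HIGH_IMPACT = {"iam:grant", "data:export", "deploy:prod"}

def allow(action: str, run_history: list[str], approved: bool) -> bool:
    if action not in HIGH_IMPACT:
        return True                    # low-impact actions pass
    if any(a.startswith("iam:") for a in run_history):
        return False                   # IAM already touched this run: stop the chain
    return approved                    # otherwise require explicit approval

print(allow("data:export", run_history=["read:metrics"], approved=True))  # True
print(allow("deploy:prod", run_history=["iam:grant"], approved=True))     # False
```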

Coordination multiplies visibility gaps

Multi-agent coordination multiplies risk because identity context fractures as tasks are delegated across many agents.

Agents collaborate to complete complex workflows. One analyzes data, another diagnoses infrastructure, and a third applies changes. Each may operate under different identities, tools, or domains.

Everything can look normal to individual systems. What disappears is the thread connecting actions into one decision flow. Fragmented visibility lets risk spread even when no single agent appears over-privileged.

Misuse scenarios that matter

New scenarios show up when influence becomes as important as direct access:

  • An agent nudges another agent to execute a privileged action, and each individual step appears legitimate.
  • An agent splits actions across systems so no single log stream looks suspicious, but the combined outcome is harmful.

These are identity problems because they depend on delegation paths, chains of authorization, and missing end-to-end context.

Why audits and investigations struggle

Audits struggle because execution is easier to record than reasoning.

Logs capture events. They rarely capture why decisions were made or how actions connect across systems. Traditional IAM trails flatten behavior into isolated entries, forcing teams to infer intent after the fact.

In practice, the hardest investigations involve tool connections. If an agent used a stored secret, the audit trail often shows the tool identity, not the human who triggered the run, and not the full chain of connected systems. This is why correlation across the full flow matters, not just collecting more logs.

In an agentic world, accountability depends on connecting:

  • the originating objective
  • the chain of identities used
  • the sequence of actions
  • the final outcome

Without that, you can have “complete” logs and still have incomplete answers.
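A sketch of an accountability record that keeps those four elements together (the schema is hypothetical):

```python
# Sketch of an accountability record (schema hypothetical): keep the four
# elements together instead of scattering them across per-identity logs.
from dataclasses import dataclass

@dataclass
class AgentRunRecord:
    objective: str               # the originating objective
    identity_chain: list[str]    # the chain of identities used, in order
    actions: list[str]           # the sequence of actions
    outcome: str                 # the final outcome

record = AgentRunRecord(
    objective="fix the performance issue",
    identity_chain=["agent-sa", "diag-role", "deploy-sa"],
    actions=["read:metrics", "read:config", "write:prod-config"],
    outcome="config change deployed",
)
print(record)
```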

How to explain agentic identity internally

Agentic identity is hard to govern if teams can’t describe it consistently.

One-minute explanation for executives

Agents behave like autonomous digital workers. They can be fast and independent, but risk comes from decision speed multiplied by visibility gaps. IAM must connect intent and execution so you can understand what the agent was trying to do, what identities it used, and what it changed.

Explanation for architects and engineers

Agentic identity works in a loop:

  • An agent forms intent
  • It discovers missing access
  • It executes through an identity (often an NHI)
  • It produces an outcome
  • The loop repeats until the objective is met

Governance has to follow that loop. Visibility, boundaries, and traceability need to apply across steps, not just at login.
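In code terms, a toy version of that loop might look like this (every helper and action name is hypothetical), with the guardrail applied at each step rather than once at login:

```python
# Toy version of the loop (all names hypothetical): the guardrail runs at
# every step of the decision loop, not once at login.
HIGH_IMPACT = {"write:prod"}

def permit(step: str) -> bool:
    return step not in HIGH_IMPACT     # stand-in for a real guardrail policy

def run_agent(plan: list[str]) -> str:
    trace: list[str] = []              # traceability across the whole loop
    for step in plan:                  # intent -> discover -> execute, repeated
        if not permit(step):
            return f"blocked at {step}; trace={trace}"
        trace.append(step)             # executed through some identity, often an NHI
    return f"complete; trace={trace}"

print(run_agent(["read:metrics", "read:config", "write:prod"]))
```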

Misconceptions to stop repeating

  • Short-lived credentials don’t automatically mean low risk when intent persists.
  • Agents aren’t just smarter bots. They’re decision-makers operating across systems.
  • When behavior drives access, risk shows up in the chain, not a single event.
  • Agents are not just NHIs.

What this means for your IAM program

The rise of agentic identity necessitates a new approach.

  • Treat decision sequences as first-class security objects, not just entitlements.
  • Correlate actions across tokens, roles, and NHIs back to the originating agent or workflow.
  • Define guardrails for what decisions agents can make as context changes, especially for high-impact actions.

This allows autonomy and delegation to be treated as identity behaviors that require supervision and traceability.

Conclusion

Agentic identities change the shape of identity risk. The biggest shift is not that agents authenticate differently. It is that they operate continuously, discover access as they go, and act through delegation chains that fragment context.

If IAM continues to govern only logins, roles, and tokens in isolation, it will miss the behavior that matters. The path to control starts with visibility into where agents run, what identities they use, and how multi-step decisions connect to outcomes.

Agent discovery with Linx: visibility with context

Linx provides Agentic Identity Discovery and Governance to close that gap and give teams a single platform to secure and govern human, non-human, and agentic identities.

Along with your human and non-human identities, you can discover which agents are in use, who has access to them (humans or other agents), and what those agents have access to.

Who is allowed to run an agent vs what the agent can reach

Agents often carry permanent access through long-lived secrets and pre-wired tool connections. Linx helps you see mismatches between who can execute or influence an agent and what the agent’s configured credentials allow it to do. This makes implicit privilege visible so teams can remediate it.

Agent-to-app flow mapping across connected tools

Developers connect agents to many apps quickly, often through generic HTTP and API components or custom connectors. Linx correlates agent activity into an end-to-end flow from agent to application to resource. This helps teams understand blast radius, spot unexpected connections, and investigate multi-step behavior without stitching evidence together manually.

Which agents exist and who owns them?

Get an inventory of agents and quickly flag ones without clear owners.

Who can access the agent?

You can see which accounts have access to an agent, which helps you understand who can run it, manage it, or influence its behavior.

What does the agent have access to?

You can review the permissions and resources the agent can reach, including signals that help you prioritize what to look at first, and how the agent got the access in the first place (API token, service account, OAuth, etc.).

Bring agentic identities into your existing governance program

Most IAM teams already have a set of routines that work: access reviews, ownership assignment, least privilege, and remediation. Linx incorporates agentic identities into the same platform you use to secure and govern human and non-human identities across SaaS, cloud, and on-prem applications. That keeps visibility and governance consistent, even as identity types expand.