"Twenty years ago, most of these words didn’t even exist. Now they run the world. Think about that." --YNOT!
Agent
A software system that does not just answer questions but takes actions. It can read, write, click, send, search, call tools, and complete tasks.
Agentic AI
AI built to act, not just chat. It can make decisions, use tools, connect to services, and carry out multi-step work.
Autonomous Agent
An AI agent that can operate with limited human supervision. The less it has to ask permission, the more useful it becomes—and the more dangerous it can become when something goes wrong.
AI Assistant
A general term for software that helps a user with tasks. Some assistants only talk. Others are closer to autonomous agents with real system access.
API
A way for one system to talk to another. If an agent uses an API, it may be able to read data, send commands, or trigger actions in outside software.
Attack Surface
All the possible entry points a hacker can target. Every tool, plugin, API, file connection, email link, and permission adds to the attack surface.
Authentication
The process of proving identity. Usually this means usernames, passwords, tokens, or keys that tell a system, “Yes, this user or tool is allowed in.”
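A minimal sketch of token-based authentication, assuming the system already holds the expected token. The constant-time comparison matters: a naive `==` can leak, through timing, how many leading characters of a guess were correct.

```python
import hmac

def authenticate(presented_token: str, stored_token: str) -> bool:
    """Return True only if the presented token matches the stored one.

    hmac.compare_digest compares in constant time, so an attacker cannot
    learn how close a guess was by measuring response time.
    """
    return hmac.compare_digest(presented_token.encode(), stored_token.encode())

print(authenticate("s3cret-token", "s3cret-token"))  # True
print(authenticate("wrong-guess", "s3cret-token"))   # False
```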
Authorization
The rules that determine what a user, tool, or agent is allowed to do after it gets in.
Assume Breach
A security mindset that says you should behave as though the attacker will get in eventually. Build defenses so one failure does not become a full disaster.
Audit Trail
A recorded history of what happened in a system. This helps you figure out who did what, when they did it, and whether something suspicious occurred.
Blast Radius
How much damage a compromised system can do. A tightly controlled agent has a small blast radius. A fully trusted agent with wide access can cause a big one.
Canary Token
A fake, unique piece of data placed in a system so that if it shows up somewhere else, you know something leaked.
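The idea in miniature, with made-up names: plant a unique, meaningless value inside internal data, then watch outbound traffic for it. The value has no legitimate reason to appear anywhere else, so a single sighting means something leaked.

```python
import secrets

# Generate a unique marker and plant it in an internal document.
canary = f"cnry-{secrets.token_hex(8)}"
internal_doc = f"Quarterly notes. Tracking id: {canary}."

def leaked(outbound_text: str, canary: str) -> bool:
    """True if the planted canary shows up where it never should."""
    return canary in outbound_text

print(leaked(f"Draft email quoting: {canary}", canary))   # True: data escaped
print(leaked("Draft email with nothing sensitive", canary))  # False
```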
Chatbot
An AI system that mainly talks with the user. It may still make mistakes, but it is usually less dangerous than an autonomous agent because it has fewer tools and less authority.
Compromise
When a system, model, tool, account, library, or service has been corrupted, manipulated, or taken over by an attacker.
Context
All the information the model sees at a given moment—your prompt, previous messages, documents, tool outputs, instructions, and outside content. If the context is poisoned, the model can be misled.
Credentials
The digital “proof” used to gain access to systems, such as passwords, access keys, API tokens, or login cookies.
Data Exfiltration
The theft of data from a system. This is one of the most serious risks because it can happen quietly, without obvious signs.
Dependency
A piece of outside code or software your program relies on. If that dependency is compromised, your software can become compromised too.
Direct Prompt Injection
A prompt injection attack where the attacker talks directly to the model and tries to trick it into ignoring its normal instructions.
Drift
When an agent or model slowly behaves differently over time due to changing inputs, updated tools, altered prompts, or new context. Sometimes harmless. Sometimes a warning sign.
Endpoint
A device or system connected to a network, such as a laptop, phone, server, or cloud app. Agents often interact with multiple endpoints.
GitHub
A popular platform for storing, sharing, and updating code. Very useful. Also a place where attackers look for weak points in open-source software.
Guardrails
Rules or technical controls meant to limit what an AI system can say or do. Helpful, but not magical. A determined attacker may still find ways around them.
Hallucination
When an LLM confidently makes something up. In code, this may mean inventing library names, functions, or facts that do not exist.
Human in the Loop
A setup where a person reviews or approves important actions before the agent completes them. This slows things down a little, which is often cheaper than cleaning up a catastrophe later.
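A toy approval gate, with an assumed list of risky actions: low-stakes work flows through, while anything on the risky list stops until a person signs off.

```python
# Hypothetical set of actions considered too risky to run unattended.
RISKY_ACTIONS = {"send_email", "delete_file", "transfer_funds"}

def run_action(action: str, approved_by_human: bool) -> str:
    """Execute routine actions freely; hold risky ones for human approval."""
    if action in RISKY_ACTIONS and not approved_by_human:
        return "blocked: awaiting human approval"
    return f"executed: {action}"

print(run_action("summarize_doc", approved_by_human=False))  # executed
print(run_action("delete_file", approved_by_human=False))    # blocked
print(run_action("delete_file", approved_by_human=True))     # executed
```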
Indirect Prompt Injection
A hidden attack placed inside outside content—like webpages, PDFs, emails, or documents—that the AI later reads. The attacker is not talking directly to the model. He is poisoning what the model consumes.
Inference
The process of the model generating an answer or action based on the prompt and context it receives.
Jailbreak
An attempt to get a model to ignore its rules, restrictions, or safety instructions and do something it was not supposed to do.
Key
A secret code used to access systems, APIs, or encrypted data. If an attacker steals a key, he may not need a password.
Least Privilege
A security rule that says a system should only have the minimum access needed to do its job. Not the whole kingdom when all it needs is the front porch.
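One way to sketch this for agents, using invented agent and tool names: each agent gets an allowlist of exactly the tools its job requires, and everything else is denied by default.

```python
# Hypothetical per-agent tool allowlists. Deny by default; grant narrowly.
AGENT_TOOLS = {
    "research_agent": {"web_search", "read_file"},
    "billing_agent": {"read_invoice"},
}

def call_tool(agent: str, tool: str) -> str:
    """Run a tool only if this agent was explicitly granted it."""
    allowed = AGENT_TOOLS.get(agent, set())
    if tool not in allowed:
        raise PermissionError(f"{agent} is not allowed to use {tool}")
    return f"{agent} ran {tool}"

print(call_tool("research_agent", "web_search"))  # fine: front porch access
# call_tool("billing_agent", "web_search") would raise PermissionError
```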
Library
A reusable chunk of software code written by someone else. Libraries make development faster, but they also widen the supply chain and create risk.
LLM (Large Language Model)
The core language engine behind many AI tools. It predicts and generates text based on patterns. It can sound wise while still lacking judgment, suspicion, and common sense.
LLM Compromise
When an attacker manipulates what the model sees, trusts, or does. This often happens through prompt injection, poisoned context, malicious tools, or compromised external data.
Logging
Keeping records of system activity. Logs can reveal what the AI did, what tools it called, and where things started going sideways.
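A bare-bones version of structured logging for an agent, doubling as an audit trail. Real systems would write to append-only, tamper-evident storage; this sketch just records who did what, and when, as JSON lines.

```python
import json
import time

audit_log = []  # stand-in for durable, append-only log storage

def log_event(actor: str, action: str, detail: str) -> None:
    """Record actor, action, detail, and timestamp as a JSON line."""
    audit_log.append(json.dumps({
        "ts": time.time(),
        "actor": actor,
        "action": action,
        "detail": detail,
    }))

log_event("agent-7", "tool_call", "web_search: 'quarterly report'")
log_event("agent-7", "file_write", "/tmp/summary.txt")
print(len(audit_log), "events recorded")
```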
MCP (Model Context Protocol)
A system that allows an AI model to connect to outside tools, services, and functions. In practical terms, it gives the model hands instead of just a mouth.
MCP Compromise
When the tool-connection layer is hijacked, altered, or abused. The agent then follows poisoned instructions from a source it believes is trustworthy.
Middleware
Software that sits between systems and helps them communicate. In AI security, middleware can sometimes be used to filter, inspect, or block risky requests.
Model Distillation
A process where a smaller model learns from a larger one. This can be legitimate, but it can also be used to copy a proprietary model's behavior without permission.
NPM
A widely used package manager in the JavaScript world. It makes software installation and updates easier, which is wonderful until poisoned packages slip into the stream.
Open Source
Software whose source code is publicly available. This can improve transparency and collaboration, but it also gives attackers a clear map of what many systems are using.
Package
A bundle of code distributed for reuse. Packages save time, but if one is malicious or compromised, it can infect many downstream systems.
Package Manager
A tool that installs, removes, and updates code packages. Useful, efficient, and an excellent place for supply chain attacks when misused.
Permission Abuse
When an agent or tool uses access it was given for the wrong purpose. Sometimes because it was tricked. Sometimes because no one thought to restrict it.
Phishing
A fake message designed to trick a person into giving up information, clicking a malicious link, or taking an unsafe action. AI is making phishing more personal and more convincing.
Plugin
An extra feature or component added to software. Every plugin extends capabilities, but also creates another opening for error or attack.
Poisoned Context
Information fed into a model that contains hidden instructions, false assumptions, or malicious content designed to manipulate the model’s behavior.
Privilege Escalation
When an attacker gains more access than they were supposed to have. A small foothold becomes a bigger one, and soon the house keys are missing.
Probabilistic Security
The recognition that security in AI is not purely yes-or-no. Because LLMs work on probabilities, the goal is usually to reduce risk as far as possible, not to pretend perfection exists.
Prompt
The instruction or input given to an AI model.
Prompt Injection
A technique where an attacker hides instructions in text or content so the model obeys them as though they were legitimate commands.
Prompt Stack
The combination of all instructions influencing the model at once—system prompts, user prompts, tool instructions, retrieved data, and conversation history.
Rate Limiting
A control that restricts how often a system can make requests. This helps slow down abuse, brute-force attacks, and runaway automated behavior.
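A common way to implement this is a token bucket: requests spend tokens, tokens refill at a fixed rate, and anything beyond the budget gets refused. A minimal sketch:

```python
import time

class TokenBucket:
    """Allow at most `capacity` requests per `per` seconds, refilled smoothly."""

    def __init__(self, capacity: int, per: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.rate = capacity / per          # tokens regained per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, per=60)
results = [bucket.allow() for _ in range(5)]
print(results)  # first three allowed, the rest refused until tokens refill
```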
Remote Code Execution
A serious security problem where an attacker gets a system to run code on their behalf. That is often the moment a bad day becomes a memorable one.
Retrieval
When a system pulls outside information—documents, search results, notes, files—to help the model answer or act.
Rogue Tool
A plugin, MCP server, package, or service that behaves maliciously or has been altered to do so.
Sandbox
An isolated environment where software or an agent can run with tight restrictions. This limits what it can touch and reduces damage if it gets compromised.
Secrets
Sensitive pieces of data like passwords, API keys, tokens, and credentials that should never be exposed in prompts, logs, or code.
Session Token
A temporary credential that tells a system a user is already authenticated. If stolen, it can sometimes let an attacker skip the login step entirely.
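A sketch of why expiry matters, with an assumed 15-minute lifetime: a token is only as safe as it is unguessable and short-lived, so check both on every use.

```python
import secrets
import time

SESSION_TTL = 15 * 60  # assumed 15-minute session lifetime

def issue_session() -> dict:
    """Create an unguessable token paired with an expiry timestamp."""
    return {"token": secrets.token_urlsafe(32),
            "expires": time.time() + SESSION_TTL}

def session_valid(session: dict, presented: str) -> bool:
    """Valid only if the token matches AND the session has not expired."""
    return (secrets.compare_digest(session["token"], presented)
            and time.time() < session["expires"])

s = issue_session()
print(session_valid(s, s["token"]))      # True: correct token, still fresh
print(session_valid(s, "stolen-guess"))  # False: wrong token
```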
Slop Squatting
When attackers register fake package names that AI coding tools hallucinate, then wait for the agent or developer to install them.
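One plausible countermeasure, sketched with made-up package names: before installing anything an AI tool suggests, check it against a vetted internal allowlist rather than trusting the suggestion.

```python
# Hypothetical internal allowlist of vetted package names.
VETTED_PACKAGES = {"requests", "numpy", "pandas"}

def safe_to_install(package: str) -> bool:
    """Only allow installs of packages someone has actually reviewed."""
    return package.lower() in VETTED_PACKAGES

print(safe_to_install("requests"))         # True: vetted
print(safe_to_install("requessts-utils"))  # False: plausible hallucinated name
```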
Social Engineering
Manipulating people rather than systems. Instead of breaking the lock, the attacker convinces someone to open the door.
Software Supply Chain
The full chain of code, libraries, tools, packages, updates, services, and infrastructure that your software depends on. Attackers often target the weakest link rather than the main product.
Supply Chain Attack
An attack on something your system trusts—like a package, library, update server, dependency, or tool—so the damage flows downstream into your software.
Telemetry
Data collected about how a system behaves. Useful for performance and security monitoring, as long as it is gathered and reviewed properly.
Third-Party Tool
Any outside tool or service the agent relies on that you did not build and fully control yourself.
Tool Call
When an AI agent reaches beyond itself to use a function, service, plugin, or MCP-connected capability.
Tooling
The collection of outside functions, packages, services, and utilities an agent or developer uses to get work done.
Trusted Tool
A tool the agent has been configured to believe is safe. If that trust is misplaced, the danger becomes much greater.
Vector Database
A database used to store and retrieve information based on similarity, often used in AI retrieval systems. Useful, but dangerous if poisoned or poorly secured.
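"Based on similarity" usually means comparing embedding vectors, often by cosine similarity. A toy version with three-dimensional vectors (real embeddings have hundreds of dimensions, and the documents here are invented):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny stand-in for a vector store: document text -> embedding.
store = {
    "reset your password": [0.9, 0.1, 0.0],
    "quarterly revenue":   [0.0, 0.2, 0.9],
}

query = [0.8, 0.2, 0.1]
best = max(store, key=lambda doc: cosine(store[doc], query))
print(best)  # the nearest vector wins: "reset your password"
```

If an attacker can write into the store, the "nearest" document can be a poisoned one, which is why these databases need the same access controls as any other data source.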
Vibe Coding
Writing software by leaning heavily on AI suggestions and generated code without deeply inspecting every part. Fast and productive—until the shortcut takes you through a minefield.
Workflow Automation
Using software or AI to trigger actions automatically across systems. Great for productivity. Also great for spreading mistakes at machine speed.
Zero Trust
A security model that says nothing should be automatically trusted, even inside your own environment. Every request should be verified.
Zero-Day
A previously unknown vulnerability with no patch available yet. Attackers love these because defenders start the race already behind.
© 2025 insearchofyourpassions.com - Some Rights Reserved - This website and its content are the property of YNOT. This work is licensed under a Creative Commons Attribution 4.0 International License. You are free to share and adapt the material for any purpose, even commercially, as long as you give appropriate credit, provide a link to the license, and indicate if changes were made.