When your AI helper becomes a risk

The AI assistant drafting emails, summarizing reports or organizing schedules may be boosting productivity, but it could also be creating unseen risks.

According to data from Microsoft, more than 80 percent of Fortune 500 companies now deploy active AI “agents,” software tools that can independently perform tasks, often built using low-code or no-code platforms. Increasingly, these tools are created not just by IT teams but also by everyday employees experimenting with automation in their daily work.

From finance and retail to manufacturing and education, AI agents are becoming embedded in routine workflows worldwide. But as adoption accelerates, visibility is lagging behind.

Microsoft’s Data Security Index shows that only 47 percent of organizations have implemented specific security controls for generative AI. In a multinational survey of over 1,700 data security professionals commissioned by the company, 29 percent said employees are already using unsanctioned AI agents for work tasks.

Security teams have identified new threats, including “memory poisoning,” in which attackers embed harmful instructions in an AI assistant’s stored memory so that they persist and subtly shape its future responses.
