Large language models are now built into search, customer service, legal work, and education, but a new study warns that their rapid spread is outpacing safeguards. These systems, it says, pose growing risks around privacy, security, misinformation, bias, and accountability, and no single fix can fully contain them.
The central threat, the study argues, is not just bad outputs but a broader system of vulnerabilities, including prompt injection, data leakage, hallucinations, and manipulation. Managing those risks will require layered governance, stronger technical controls, human oversight, and clearer regulation.
Read the original article here: https://www.devdiscourse.com/article/technology/3885943-ai-hallucinations-bias-and-data-leaks-expanding-llm-risk-landscape