AI tools such as ChatGPT, Google Gemini, Claude and Microsoft Copilot are also known as generative AI. This is because they can generate new content.
Initially, these tools replied with text in response to a question, a request, or a conversation you started with them. But generative AI apps can now increasingly create photos and paintings, generate voice content, compose music, and produce documents.
People from all walks of life and across industries are increasingly using such AI to enhance their work. Unfortunately, so are scammers.
In fact, there is a product sold on the dark web called FraudGPT, which allows criminals to make content to facilitate a range of frauds, including creating bank-related phishing emails, or to custom-make scam web pages designed to steal personal information.
More worrying is the use of voice cloning, which can be used to convince a relative that a loved one is in need of financial help, or even in some cases to convince them the individual has been kidnapped and needs a ransom paid.
There are some pretty alarming stats out there about the scale of the growing problem of AI fraud.
Reports of AI tools being used to try to fool banks' systems increased by 84% in 2022, according to the most recent figures from anti-fraud organisation Cifas.
It is a similar situation in the US, where a report this month said that AI "has led to a significant increase in the sophistication of cyber crime".