The Digital Ship is Full of Leaks

Cybersecurity

The integration of machine learning (ML) and artificial intelligence (AI) will greatly impact cybersecurity. AI-driven threat detection, anomaly detection, and automated response systems will become increasingly capable of recognizing and neutralizing cyber threats.
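To make the anomaly-detection idea concrete, here is a minimal statistical sketch, not a production ML system: it flags values that deviate sharply from a baseline, the same principle that more sophisticated AI detectors build on. The login counts and threshold are illustrative assumptions.

```python
import statistics

def find_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean (a simple z-score check)."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # all values identical: nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical hourly login counts; the spike at index 5 is suspicious.
logins = [40, 38, 42, 41, 39, 400, 43, 40]
print(find_anomalies(logins))
```

Real systems replace the z-score with learned models and feed in far richer signals, but the workflow is the same: establish a baseline, score deviations, and escalate outliers.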

Cybersecurity experts will face additional challenges in building strong defenses against adversarial attacks on AI and ML systems.

Bloomberg News reports that cybersecurity firm Wiz is considering a share sale at a valuation of up to $20 billion.


Bloomberg News reported on Tuesday that Israeli cybersecurity company Wiz is considering a sale of existing shares at a valuation between $15 billion and $20 billion.

The article also stated, citing people familiar with the matter, that the company is discussing a deal that would allow current shareholders to sell between $500 million and $700 million of their holdings.

Wiz and Alphabet, the parent company of Google, ended negotiations earlier in July over a planned $23 billion acquisition, roughly double the valuation the cybersecurity company had disclosed in May, when it raised $1 billion in a private funding round.

The article also named G Squared, Thrive Capital, and Lightspeed Venture Partners as venture firms interested in the potential share sale.

Third Recently Discovered Ivanti Vulnerability Exploited in the Wild

A vulnerability in Ivanti’s Virtual Traffic Manager application delivery controller is being exploited in the wild. It is the third such issue Ivanti customers have been warned about in the last two weeks.

The most recent, CVE-2024-7593, is a critical Virtual Traffic Manager (vTM) authentication bypass vulnerability that enables a remote, unauthenticated attacker to create an administrator account.

On August 12, Ivanti released updates for CVE-2024-7593. A few days later, the company updated its advisory to inform customers that a proof-of-concept (PoC) exploit had been made public, although it was not aware of any in-the-wild exploitation at the time.

SecurityWeek is aware of no public reports detailing attacks exploiting CVE-2024-7593 as of this writing, although CISA added the vulnerability to its Known Exploited Vulnerabilities (KEV) Catalog on Tuesday.

In addition to patches, Ivanti has released guidance for reducing exploitability, along with indicators of compromise (IoCs). However, it has not yet updated the advisory to mention malicious exploitation.

This year, ZoomEye has seen 164 internet-exposed Ivanti vTM instances, the bulk of them in the United States and Japan, while Censys reports 97.

CVE-2024-7593 was added to CISA’s KEV list shortly after CVE-2024-8963 and CVE-2024-8190, which affect Ivanti’s Cloud Services Appliance (CSA) and which have been chained for unauthenticated remote code execution. Threat actors frequently exploit Ivanti product vulnerabilities for malicious purposes. There are already 20 entries for Ivanti vulnerabilities on CISA’s KEV list; some have been used to deploy backdoors, while others have been used to breach well-known organizations such as MITRE and CISA.

The AI Wild West: Untangling the Security and Privacy Dangers Associated with GenAI Apps 

What security and privacy issues arise when GenAI users upload data to more than eight applications each month?

The use of generative AI in the workplace has skyrocketed; by some estimates, 25% of workers have tried or are now using AI at work. However, precise information about which apps employees use, and for what purposes, has been hard to obtain. For this reason, we examined a randomized sample of 1,000 enterprise employees who had used at least one GenAI app in the previous three months.

  • Users of GenAI go “all in.”

We found that workers typically go “all in” once they begin using GenAI, contributing data to an average of 8.25 applications per month. Around 18.9% qualify as “power users,” using more than 12 apps, while 10% use only one.

Monthly patterns can sometimes signal larger shifts. The number of applications used in July was 11% lower than in June, which may mean that employees are homing in on the apps they find work best for their use cases.

There are presently 5,020 (and most likely an unsustainable number of) GenAI or GenAI-enabled tools in use. Of these, 25% assist with content creation, editing, summarization, and translation; 18% are business tools such as Grammarly, Slack, and Notion; and 13% provide customer support services.

It should come as no surprise that ChatGPT is the most widely used app: 84% of the sample reported using it in July, six times the share of Google Gemini, the second most popular app at 14% of users. Claude, Perplexity, and Microsoft Copilot also feature prominently.

  • For use cases, content is king.

By examining user prompts, we also aimed to evaluate how and why employees use GenAI apps. It quickly became evident that “content creation, summarizing, or editing” is the best-supported business case, accounting for almost 47% of prompts. Software engineering is second at 15% of prompts. Other high-ranking areas include data interpretation, processing, and analysis (12%), business and finance (7%), and problem-solving and troubleshooting (6%).

  • Roughly one-third of applications claim to use user data for training.

The use of GenAI carries several potential security and privacy risks. Examining all 5,020 active applications, we found that 30.8% claim to train their models on user data, implying that any sensitive data provided might be used for this purpose. Moreover, fewer than 1% have a “Trust Center” that lets users quickly review important security and privacy options.

Organizational Best Practices

Merely being aware of the hazards is insufficient when it comes to AI and data privacy. It is critical to take decisive action to safeguard valuable data assets. Essential best practices include:

  • Continual Evaluations: Conduct routine audits of the applications used within your company to understand their data practices.
  • Unambiguous Policies: Create and enforce explicit policies for AI and data usage.
  • User Education: Inform employees about the risks of using AI tools and the best practices for doing so.
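The audit step above can be sketched as a simple policy check. This is an illustrative sketch only: the inventory, its field names, and the policy rules are hypothetical assumptions, not any specific vendor’s data or API.

```python
# Hypothetical app inventory: in practice this data would come from
# vendor documentation, contracts, or a trust center review.
inventory = {
    "ChatGPT": {"trains_on_user_data": True, "approved": True},
    "Grammarly": {"trains_on_user_data": False, "approved": True},
    "UnknownSummarizer": {"trains_on_user_data": True, "approved": False},
}

def audit(inventory):
    """Return (app, reason) pairs for apps that violate policy:
    either unapproved, or approved but training on user data."""
    findings = []
    for app, meta in inventory.items():
        if not meta["approved"]:
            findings.append((app, "not on the approved list"))
        elif meta["trains_on_user_data"]:
            findings.append((app, "vendor trains models on user data"))
    return findings

for app, reason in audit(inventory):
    print(f"{app}: {reason}")
```

Running such a check on a schedule, with the inventory kept current, turns the “continual evaluations” practice from a one-off exercise into a repeatable control.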

The digital ship is full of leaks, but there are ways to keep it afloat. The challenges presented by AI will only grow with time. Yet these challenges can be overcome, and GenAI’s potential fully realized while protecting data, provided that proactive, data-centric security measures are implemented and kept up to date.
