
Hello Friends,

As digital ecosystems continue to expand, information warfare has emerged as a significant challenge for governments, organizations, and individuals worldwide. The manipulation, dissemination, or distortion of information through digital platforms can influence public perception, disrupt democratic processes, and undermine trust in institutions. With the growing use of artificial intelligence, social media amplification, and coordinated disinformation campaigns, the risks associated with information manipulation have intensified.

Regulators and policymakers are increasingly focusing on transparency, accountability, and platform responsibility to mitigate these threats. By strengthening governance frameworks, promoting media literacy, and encouraging responsible technology deployment, organizations and societies can safeguard the integrity of information in the digital environment.

Enjoy reading!


Privacy Enforcement

Pennsylvania: Consumer Data Privacy Act Introduced to House

Pennsylvania lawmakers introduced the Consumer Data Privacy Act to the House, aiming to establish a comprehensive framework governing the collection, processing, and protection of personal data. The proposed legislation outlines key obligations for businesses acting as data controllers and processors, including transparency in data practices, limitations on the processing of sensitive personal information, and mechanisms enabling consumers to exercise rights such as access, correction, and deletion of their personal data. The bill also proposes accountability measures, including data protection assessments and enforcement by state authorities. If enacted, the legislation would strengthen privacy protections for residents and contribute to the growing patchwork of state-level data privacy laws in the United States. Read More

New York: Bill for Generative AI Notices Passes Senate

The New York Senate passed a bill requiring organizations deploying generative artificial intelligence systems to provide clear notices when users interact with AI-generated content. The proposed legislation aims to enhance transparency by ensuring that individuals are informed when text, images, or other outputs are produced by automated systems rather than human actors. Lawmakers introduced the measure in response to growing concerns around misinformation, synthetic media, and the potential misuse of AI-generated content. If adopted into law, the requirement would obligate companies to implement disclosure mechanisms and strengthen accountability in AI deployments, reflecting increasing regulatory attention toward the governance of emerging artificial intelligence technologies. Read More


Data Breach

Italy: Garante Fines Intesa Sanpaolo €17.6 Million for Unlawful Processing of Data

Italy’s data protection authority, the Garante per la Protezione dei Dati Personali, imposed a €17.6 million fine on Intesa Sanpaolo for unlawful processing of personal data. The investigation found that the bank processed customer information without a valid legal basis and failed to ensure compliance with key principles under the GDPR, including lawfulness and data minimization. Regulators also identified deficiencies in internal governance and oversight mechanisms relating to data handling practices. The enforcement action highlights the heightened expectations placed on financial institutions to implement strong privacy compliance frameworks and demonstrates regulators’ willingness to impose substantial penalties for violations involving large-scale processing of personal data. Read More

Spain: AEPD Fines FC Barcelona €500,000 for Lack of DPIA in Biometric Census Update

Spain’s data protection authority, the Agencia Española de Protección de Datos (AEPD), fined FC Barcelona €500,000 for failing to conduct a Data Protection Impact Assessment in relation to the update of its biometric census system. The authority determined that the processing involved sensitive biometric data used for identity verification, which poses high risks to individuals’ privacy and therefore requires a prior DPIA under the GDPR. The absence of a proper assessment meant that potential risks to data subjects were not adequately evaluated or mitigated. The case reinforces regulatory expectations that organizations must assess privacy risks before deploying technologies involving biometric or other high-risk personal data processing. Read More

California: CalPrivacy Fines Ford Motor Company $375,703 for Opt-Out Process Violations

The California Privacy Protection Agency imposed a $375,703 fine on Ford Motor Company for deficiencies in its consumer opt-out mechanism under the California Consumer Privacy Act. The investigation found that Ford failed to provide an effective and accessible method for consumers to exercise their right to opt out of the sale or sharing of personal information. Regulators determined that the company’s processes created barriers that prevented users from easily submitting opt-out requests. The enforcement action highlights the importance of implementing user-friendly privacy controls and demonstrates the growing regulatory scrutiny around consumer rights mechanisms under evolving U.S. state privacy laws. Read More


Privacy in Spotlight

UK: ICO Publishes Letter to Social Media and Video Sharing Platforms on Strengthening Age Assurance

The UK Information Commissioner’s Office (ICO) issued a formal letter to social media and video sharing platforms urging stronger implementation of age assurance measures to protect children online. The regulator emphasized that platforms must adopt reliable mechanisms to determine whether users are minors and ensure that age-appropriate safeguards are applied. The guidance aligns with the UK Children’s Code, which requires organizations to design digital services with high privacy protections for young users. The ICO highlighted the need for privacy by design, transparency, and minimization of data collection. The communication reinforces regulatory expectations that platforms prioritize child safety and responsible data processing practices. Read More

Uzbekistan: AI Regulation – Legal Framework, Compliance Obligations, and Practical Implications

Uzbekistan is advancing its regulatory approach to artificial intelligence by developing a legal framework governing the use and deployment of AI technologies. The proposed measures aim to establish clear compliance obligations for organizations utilizing AI systems, particularly in areas involving personal data processing, algorithmic transparency, and accountability. The framework is expected to introduce requirements for risk assessments, oversight mechanisms, and safeguards against discriminatory or harmful outcomes resulting from automated decision making. By introducing structured governance for AI technologies, Uzbekistan seeks to promote responsible innovation while ensuring alignment with international standards on privacy, data protection, and ethical use of emerging technologies. Read More


Regulations

Poland: Bill Implementing NIS2 Directive Signed by President

Poland’s President signed legislation implementing the European Union’s NIS2 Directive, strengthening the country’s cybersecurity and digital infrastructure protection framework. The new law expands the scope of entities subject to cybersecurity obligations, including organizations operating in critical and important sectors such as energy, finance, healthcare, and digital services. It introduces stricter requirements for risk management, incident reporting, and security governance, while granting national authorities enhanced supervisory and enforcement powers. Organizations covered by the regulation will be required to adopt robust cybersecurity controls and ensure timely reporting of significant incidents. The legislation aligns Poland’s national framework with broader EU efforts to improve resilience against evolving cyber threats. Read More

Czechia: New Cybersecurity Act Enters into Effect

Czechia’s new Cybersecurity Act has entered into effect, introducing a modernized legal framework to strengthen national cybersecurity governance and align domestic law with the European Union’s NIS2 Directive. The legislation expands regulatory oversight to additional sectors and organizations considered critical to national infrastructure and digital services. It establishes clearer obligations for entities regarding risk management, incident detection, and mandatory reporting of cybersecurity incidents. The law also enhances the authority of national cybersecurity regulators to supervise compliance and impose penalties for violations. The updated framework aims to improve the resilience of essential services and strengthen protections against increasingly sophisticated cyber threats. Read More

Maine: Bill to Enact the Maine Online Data Privacy Act Passed in Senate

The Maine Senate passed the Maine Online Data Privacy Act, advancing legislation designed to strengthen consumer privacy protections within the state. The proposed law introduces obligations for businesses that collect or process personal data, including requirements related to transparency, data minimization, and safeguards for sensitive information. It also grants individuals rights to access, correct, and delete their personal data, as well as the ability to opt out of certain forms of data processing. If enacted, the legislation would place Maine among the growing number of U.S. states adopting comprehensive privacy laws, reflecting increasing regulatory momentum toward stronger consumer data protection frameworks. Read More