A controversial facial recognition company has just informed its customers of a data breach in which its entire client list was stolen.
Clearview AI leapt to fame in January when a New York Times report claimed that the start-up had scraped up to three billion images from social media sites to add to its database.
That makes it a useful resource for its law enforcement clients, which can query images they capture against the trove. The FBI’s own database is said to contain little more than 600 million images.
Now those clients have been exposed after an unauthorized intruder managed to access Clearview AI’s entire customer list, the number of user accounts those companies have set up, and the number of searches they’ve carried out. However, the intruder apparently didn’t get hold of client search histories.
Interestingly, the firm claimed that its own servers, systems and network weren’t compromised.
In a statement sent to The Daily Beast, company attorney Tor Ekeland claimed that security is the firm’s top priority.
“Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security,” he added.
Clearview AI is coming under increasing pressure from privacy activists and social media companies.
The latter have reportedly demanded the firm “cease and desist” from its web scraping activity as it breaches their terms of service, although the firm claims it has a First Amendment right to collect publicly available photos.
The firm has also been forced to deny rumors that consumers could use its service to find out personal information, including address details, about people whose images they possess.
Tim Mackey, principal security strategist within the Synopsys CyRC (Cybersecurity Research Center), argued that cyber-criminals will now view compromise of Clearview AI’s systems as a priority.
“I would encourage Clearview AI to provide a detailed report covering the timeline and nature of the attack. While it may well be that the attack method is patched, it also is equally likely that the attack pattern is not unique and can point to a class of attack others should be protecting against,” he added.
“Clearview AI possesses a target for cyber-criminals on many levels, and as is often the case, digital privacy laws lag technology innovation. This attack now presents an opportunity for Clearview AI to become a leader in digital privacy as it pursues its business model based on facial recognition technologies.”
Now is the time to review your exposure to GDPR and CCPA-related lawsuits, and review contracts related to penetration testing.
In a talk at the RSA Conference in San Francisco exploring recent cyber-related court cases, Julia Bowen, senior vice-president, general counsel and corporate secretary at The MITRE Corp, and Professor Rick Aldrich, cybersecurity policy and compliance analyst at Booz Allen Hamilton, reviewed a number of issues relating to border control, surveillance and online page removals.
“If you are under the GDPR or the CCPA, make sure you’re doing that correctly,” Aldrich said, referencing cases where page takedowns were disputed by search engines over local laws.
He also recommended checking if you are collecting biometric data, and the legality of doing that, referencing a recent case where the Illinois Supreme Court dismissed a case that would have pared back a state law limiting the use of facial recognition and other biometrics. “If you are doing worldwide business that involves people in Illinois, you may want to check that,” Aldrich advised.
He also recommended reviewing the laws that apply to penetration testing, considering the recent case of the Coalfire employees being arrested whilst on an exercise in Iowa.
In the coming months, Aldrich recommended updating your organization’s policies to minimize risk with regard to personal information, cloud providers and cross-border data transfers. Aldrich and Bowen listed a number of issues related to these cases, including where personal devices are seized and owners are ordered to unlock them.
“If you travel internationally, you may be asked to surrender equipment and risk giving up information to the government,” he said. “If they seize equipment, you may not have it anymore.”
Finally, Aldrich recommended updating your organization’s policies to minimize risk with regard to insurance providers, especially where payouts were not made because an incident was determined to be an act of war. “Some people are now saying that they don’t have an exclusion for an act of war, so be very careful to check that they will pay out,” he said. “There are a lot of companies that are not expecting to pay out $50m when NotPetya occurs.”
It’s time to get rid of parental controls and let younger people make their own decisions.
Speaking in the opening keynotes at the RSA Conference in San Francisco, Wendy Nather, head of advisory CISOs, Duo Security at Cisco, said that parental controls need to be disabled as “we need to teach [children] to make good security choices for themselves because they need to learn this from a young age.”
As part of her keynote, Nather said that she does not use parental controls at home, but her teenage daughter asked for them to be turned on “to help enforce her study time,” so they were set up for that purpose, with Nather controlling the password.
“We have to teach them to make good security decisions, as we keep making the same mistakes year after year,” she said, saying this was done with web servers, mobile, and IoT, and this is because of the demographic. “We have to teach everybody, so it doesn’t matter who comes in with new technology, they know how to apply the security controls.”
She concluded by saying that it has to be about “security of, by, and for the people as we’re the ones who have been working on this for decades.”
Speaking at the RSA Conference in San Francisco on how to build a comprehensive Internet of Things (IoT) security testing methodology, Rapid7 IoT research lead Deral Heiland said that it is currently hard to pin down what IoT is, so he built a testing model around the traits of IoT so that such devices can be better detected and secured.
He said that he often asks companies if they have any IoT technology, so he created a methodology to define the traits of IoT, based on four key areas:
- Management control—to control and manipulate data
- Cloud service APIs and storage
- Capability to be moved to the cloud
- Embedded technology
He said that knowing the traits of IoT gives you the ability to better defend your ecosystem, and on that basis he built a methodology for building and testing IoT.
The first stage is a functional evaluation: finding information and gathering knowledge, as “there is no way to test your IoT ecosystem if you don’t know how it works.”
Heiland said that once you have done a functional evaluation, you can carry out broader reconnaissance, using open source intelligence to see what frequency the communications run at, what components the device uses, and whether those components have had any notable vulnerabilities or exploits in the past.
The next stage is testing, including web-based penetration tests, scans, and more manual tests of the build, such as inspecting physical ports. For the firmware, Heiland recommended analysis to look for hardcoded keys, passwords, undocumented command structures, IP addresses, and hardcoded URLs of interest. He also recommended radio frequency (RF) testing, as most IoT devices “have some form of this”; RF testing can identify the protocols in use and determine whether communications are effectively encrypted. He also recommended looking at pairing and over-the-air updates.
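The firmware pass described above can be approximated with a simple strings-and-patterns scan. Below is a minimal, hypothetical sketch: the sample firmware blob, the minimum string length, and the regex patterns are all illustrative assumptions, and a real assessment would extract the firmware with dedicated tooling first.

```python
import re

def extract_strings(blob: bytes, min_len: int = 6):
    """Pull printable ASCII runs out of a binary blob, much like the `strings` tool."""
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

# Patterns for the kinds of artifacts Heiland mentions: URLs, IPs, keys, passwords.
SUSPECT = re.compile(
    r"https?://|(?:\d{1,3}\.){3}\d{1,3}|passw|secret|api[_-]?key|"
    r"BEGIN (?:RSA|EC) PRIVATE KEY", re.I)

def scan_firmware(blob: bytes):
    """Return the de-duplicated suspicious strings found in a firmware image."""
    return sorted({s for s in extract_strings(blob) if SUSPECT.search(s)})

# Illustrative blob standing in for a real firmware dump.
firmware = (b"\x7fELF\x00\x01"
            + b"api_key=AKIA1234567890\x00"
            + b"\xff\xfe"
            + b"http://update.example-vendor.com/fw\x00"
            + b"ordinary log text\x00")

for hit in scan_firmware(firmware):
    print(hit)
```

Against a real dump, the same two-step approach (extract printable strings, then filter for credential- and endpoint-like patterns) surfaces candidates for manual review.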
Heiland admitted that one test does not work for all IoT, and elements will need to be changed for different products, as “you find new things every time and new ways of doing things.”
In one case study, he presented an analysis of a smart door lock designed to provide short-term access via email. He set up a man-in-the-middle attack using Burp Suite to create a certificate, “as the mobile app didn’t have SSL, so it was simple to create a certificate and gain man-in-the-middle access and see communications flowing back and forth.”
He said that he was able to see the communications, including how the API returned control keys for all users, which were written to the developer debug log and available via a file on the phone. “We didn’t need to root the device as all of the data was in there; this had a session token, so in theory you could control the lock forever.”
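The flaw class Heiland describes, credentials echoed into an app’s debug log, is trivial to mine once the log file is readable. The sketch below is hypothetical: the log format, field names, and values are invented for illustration and are not taken from the vendor’s app.

```python
import re

# Hypothetical debug-log lines of the kind a mobile app might write when it
# dumps raw API responses; field names and values are illustrative only.
DEBUG_LOG = """\
D/LockApp: ui refresh ok
D/LockApp: api response {"user":"guest42","session_token":"3f9c0aa1","control_key":"A1B2C3D4"}
D/LockApp: battery 87%
"""

# Match credential-like JSON fields embedded in log lines.
LEAK = re.compile(r'"(session_token|control_key)"\s*:\s*"([^"]+)"')

def find_leaked_credentials(log: str):
    """Return (field, value) pairs for credential-like fields in a debug log."""
    return LEAK.findall(log)

for field, value in find_leaked_credentials(DEBUG_LOG):
    print(f"{field} leaked: {value}")
```

A scan this simple is exactly why logging raw API responses to a world-readable file defeats every other protection the lock’s backend might have.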
He explained that this issue has now been patched, and he declined to reveal the vendor name.
In terms of who can do this sort of testing, he said he would expect a person to be a “seasoned tester at a bare minimum” as well as have hardware skills, budget for kit, and “an endless desire to learn.”
Heiland said three elements are needed to get to a better stage of IoT security. The first is for manufacturers to implement a product security testing program, testing products before they go to market and bringing those already on the market back in-house for testing.
Also, enterprise consumers should ask questions of the vendor, inventory their IoT, define what IoT is to their organization, and assign ownership.
The final element is for IoT researchers and testers to follow Heiland’s methodology and improve their own skills sets.
In a talk at the RSA Conference in San Francisco, students and researchers from the University of California, Berkeley presented a theoretical method for how voters could be influenced using technical and automated means.
Talking about “How AI Inference Threats Might Influence the Outcome of 2020 Election,” the three presented their own research, which included aggregating data to show how misinformation can be spread. Karel Baloun, software architect and entrepreneur at UC Berkeley, said these types of attacks are nefarious because “attacks on democracy” are often not seen and can later be denied.
Pointing at the 2016 US presidential election, Baloun said that the hacking of the Democrats’ emails by Russia and the passing of them to WikiLeaks “set the narrative for the election,” and there is proof that this effort was able to “suppress over 100,000 votes.” He listed four examples of elections and political movements that have been influenced:
- The 2016 Ukraine Election
- The 2016 UK Brexit vote on EU Membership
- The 2019 Hong Kong Anti-Extradition Law Protests
- The 2020 Taiwan Presidential Election
Ken Chang, cybersecurity researcher at University of California, Berkeley, said that when someone registers to vote, that information should be trusted to be held securely, as all information that is collected is “a critical piece of information.”
With voter registration data, Chang said that the potential of a data breach is obvious, so the conversation needs to be centered on how to protect information, and not on how a data broker can collect and distribute information without the person knowing.
Baloun said that in the experiment they were able to build voter databases and combine them with social media data, advertising, and messaging to influence people. Citing the case of Cambridge Analytica, Baloun said that it was able to use openly available Facebook data, along with personal information that is freely obtainable, such as credit scores and credit card data.
Baloun said the “technology is well advanced”: machine learning is already used on big data sets, and AI can generate texts and emails and write news, so it is only a matter of time before AI can carry out the whole process.
“If you suck the firehose you only get what you’re provided,” Baloun said, pointing out that it could be easy for an attacker to impersonate an influential friend or family member.
Looking at steps to take, Baloun encouraged people to push back when friends and family share such information, and to think about what they consume. He also called for the Secretary of State with responsibility for voter records to mandate a disclosure requirement, and for the FCC to ban the creation of “personal profiles” pretending to be voters.
“Each one can make a big difference, as the system depends on easily available rich voter profiles, and targeting with messaging,” he said. “To protect democracy we need to make things more expensive and less effective and let humans intervene, as they don’t know it is happening.”
How can the US deter other nations from executing cyber-attacks? According to a panel of US government officials speaking at the RSA Conference in San Francisco, there is a range of legal, diplomatic, and even military options that can be considered.
Adam Hickey, Deputy Assistant Attorney General, National Security Division at the US Department of Justice (DOJ), commented that there is a lot that can be done to deter nation-states from conducting cyber-attacks.
"Law enforcement is one tool of federal power and should be used to deter threat actors," Hickey said.
Hickey acknowledged that in many cases, even if a state threat actor is charged in an indictment, an arrest won't be made. That's why the DOJ is using other legal instruments that can disrupt operations, including court orders to seize infrastructure.
That infrastructure, however, can be anywhere in the world, a challenge that Steven Kelly, Chief of Cyber Policy, Cyber Division for the Federal Bureau of Investigation (FBI), brought up. Kelly noted that because cyber-attack infrastructure is spread across jurisdictions, attribution is often complex.
"Some people might scoff at the idea that we can deter nation-state cyber-attack activity, because the attacks keep happening, but we're working on it," Kelly said.
Kelly added that multiple agencies have been working together to get faster at identifying who is behind an attack and then working together to impose consequences more rapidly. He emphasized that it takes a lot of cooperation within the US government and with other law enforcement groups around the world to get all the facts that enable the FBI to identify threat actors behind an attack.
"Nations and the individuals that are working on their behalf can no longer assume that they can operate with anonymity," Kelly said.
Secret Information and Public Indictments
Among the assets that the US government has engaged to help deter nation-state cyber-attacks is the intelligence community, though much of their work still needs to remain secret, commented Thomas Wingfield, Deputy Assistant Secretary of Defense for Cyber Policy at the US Department of Defense (DOD).
Wingfield noted that while the DOD can't reveal everything about its operations it can and does help other agencies to keep the country safe.
Information from the public is also a key part in helping with deterrence. Hickey commented that in recent years, as companies have matured in their own cybersecurity process, attacked companies have disclosed information to the government that is critical to helping with attribution.
In the final analysis, Wingfield emphasized that deterrence isn't just about lawsuits or projecting power in some way with a retaliatory action. Rather, in his view deterrence is about influencing would-be attackers to make a different decision.
"At the end of the day, deterrence is meant to work in one place, and that is inside the human element, inside of the brain of the adversary decision maker," Wingfield said.
Cyberattacks can impact individuals and companies in different ways, but few if any industries have the same life-or-death impact as medical devices.
In recent years, medical devices and hospitals have come under increasing attack from different threat actors, which has not escaped the notice of regulators in the United States. At the RSA Conference in San Francisco, the safety implications of medical devices were detailed, along with direction on how things could well be set to improve in the years ahead.
"If those vulnerabilities aren't taken care of, devices can potentially be exploited, and that can result in patient harm or serve as a pivot point to get into a hospital network," Chase warned.
The risk to medical infrastructure is far from a theoretical threat. In 2017, the WannaCry ransomware attack had devastating consequences in the UK, shutting down NHS operations and hospitals. There have also been publicly reported flaws in medical devices that vendors have been slow to fix. Perhaps the most well-known example occurred with Abbott Laboratories and its St Jude cardiac pacemakers.
Chase added that even when patches are available for known issues, patching medical devices is often far from routine, with many hospitals unaware that they are vulnerable.
How Medical Device Security Will Get Better
The US Food and Drug Administration (FDA), together with MITRE and other stakeholders, has been engaged in multiple efforts to improve the state of medical device security. Chase noted that in 2018 the Medical Device Safety Action Plan was published by the FDA, which includes a number of action items for device manufacturers. Among the primary items is a requirement that firms build capabilities to update and patch device security into a product's design. The plan also requires that device manufacturers have coordinated disclosure policies in place in the event of a vulnerability.
Margie Zuk, Senior Principal Cybersecurity Engineer at MITRE, commented that a key challenge with medical device cybersecurity is making sure that the vulnerabilities are understood with the right amount of detail. To that end, MITRE has been developing a Medical Device Rubric for Common Vulnerability Scoring System (CVSS) that has been submitted to the FDA.
Another current effort is to help hospitals build out their preparedness for cybersecurity incidents like WannaCry. Zuk noted that with WannaCry, for example, there was a lot of confusion between hospitals and manufacturers about risk. To help with that type of situation in the future, MITRE has developed a playbook to help hospitals with incident response.
A key challenge for understanding the risk is related to testing under different scenarios. That's where Zuk said that the Medical Device Cybersecurity Sandbox effort comes into play as an effort to help validate vulnerabilities in clinical scenarios.
Software Bill of Materials (SBOM) Will Help
One of the key efforts under way in 2020 is a multi-stakeholder effort led by NTIA for a Software Bill of Materials (SBOM). With SBOM, software in medical and other devices would ship with a list of its constituent components.
"SBOM is really critical to understand if you have a vulnerability in your system," Zuk said. "Hospitals need to know what the attack surface is and what's at risk."
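As a rough illustration of the lookup an SBOM enables, the sketch below assumes a hypothetical inventory of per-device component lists; real SBOMs use standard formats such as SPDX or CycloneDX, and real matching must handle version ranges rather than exact strings.

```python
# Minimal sketch of the lookup an SBOM enables. Device and component names
# are illustrative, not drawn from any real product.
SBOMS = {
    "infusion-pump-A": {"busybox": "1.27.2", "openssl": "1.0.2k", "vendor-ui": "2.1"},
    "patient-monitor-B": {"openssl": "1.1.1d", "zlib": "1.2.11"},
}

def devices_affected(component, vulnerable_versions):
    """Which devices ship a vulnerable version of the named component?"""
    return [dev for dev, parts in SBOMS.items()
            if parts.get(component) in vulnerable_versions]

# An advisory lands for openssl 1.0.2k: which fleet assets are exposed?
print(devices_affected("openssl", {"1.0.2k"}))  # ['infusion-pump-A']
```

This is the "what's at risk" question Zuk describes: without a component inventory per device, a hospital cannot answer it at all.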
Fundamentally, the key to improving medical device cybersecurity is reducing risk and understanding the potential for exploitation.
"It's a shift in thinking from how a device is supposed to be used to how a device can be exploited by a malicious adversary that is trying to abuse the device," Chase concluded.
Australian Federal Police (AFP) could be given powers to cyber-spy and hack into online computer systems used by criminals based in Australia under a new proposal being considered by the country's federal government.
Suggested changes would allow the AFP to call for assistance from the Australian Signals Directorate (ASD) or extend the cyber-capabilities of the AFP.
Currently the ASD only has the power to hack, disrupt, and destroy foreign cybercriminal activity, as the agency is banned from spying or hacking into online systems based within Australia.
This situation means that agents who come across cybercriminal activity linked to a server based in Australia must immediately stop investigating it, no matter how serious the offense being committed.
Supporters of the proposed changes say they could help the ASD hunt down sexual predators and pedophiles who use servers in Australia for their cybercriminal activity.
"At the moment, if there is a server in Sydney that has images of a five- or six-month-old child being sexually exploited and tortured, then that may not be discoverable, particularly if it's encrypted and protected to a point where the AFP or the ACIC (Australian Criminal Intelligence Commission) can't gain access to that server," Home Affairs Minister Peter Dutton told the Australian Broadcasting Corporation.
"It can be a different picture if that server is offshore, so there is an anomaly that exists at the moment."
Reports of online child exploitation in Australia have increased massively in the past decade. Last year, the AFP received 17,000 referrals for online child exploitation material, compared to just 300 received in 2010.
A single referral can cover any amount of material, ranging from one image of a child being abused to up to thousands of videos and images.
Dutton said he wanted to put an end to cybercriminals operating in Australia with impunity.
"We are seeing the rape and torture of our children, all for sexual gratification," said Dutton. "I want to make sure that if they [the police] can get a warrant from a court and go to a pedophile's house and search that house for material . . . I want to make sure we have the same power to do that in the online life of that pedophile."
The US Department of Defense announced yesterday that it has adopted a series of ethical principles regarding the use of artificial intelligence (AI).
Designed to build on the US military’s existing ethics framework, which is based on the US Constitution, Title 10 of the US Code, Law of War, existing international treaties, and longstanding norms and values, the principles will apply to both combat and non-combat functions.
Embracing high-level ethical goals, the principles state that AI should only be used by the DoD in a way that is responsible, equitable, traceable, reliable, and governable.
Under the new principles, DoD personnel will be expected to "exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities," and "take deliberate steps to minimize unintended bias in AI capabilities," according to a statement released yesterday by the DoD.
The principles are based on a set of guidelines on the ethical use of AI published in November 2019 by the Defense Innovation Board. These guidelines—the result of 15 months of consultation with leading AI experts in commercial industry, government, academia, and the American public—were first provided to Secretary of Defense Dr Mark Esper in October.
"The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order," said Secretary Esper.
"AI technology will change much about the battlefield of the future, but nothing will change America's steadfast commitment to responsible and lawful behavior. The adoption of AI ethical principles will enhance the department's commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the US military's strong history of applying rigorous testing and fielding standards for technology innovations."
The principles align with efforts by the Trump administration to advance AI technologies. Last year, President Donald Trump launched the American AI Initiative, a national strategy for leadership in artificial intelligence. The initiative aims to discover and promote innovative uses for AI while protecting civil liberties, privacy, and American values.
New research into malware affecting mobile devices has found that stalkerware and adware posed the biggest threat to users in 2019.
The annual "Mobile Malware Evolution" report, published yesterday by Kaspersky, shows a significant increase in the number of attacks on the personal data of mobile device users. From 40,386 unique users experiencing attacks in 2018, the figure rose to 67,500 in 2019.
Mobile advertising Trojans were a major threat, with the number of detected installation packages that use this type of malware nearly doubling over the course of the year from 440,098 to 764,265. However, researchers found that the rise in attacks was not caused by classic spyware or Trojans, but by a massive spike in so-called stalkerware.
Often promoted as parental surveillance tools, stalkerware apps are installed without the device owner’s consent to secretly stream the victim’s personal information. Devices kitted out with this eavesdropping app will send images, videos, correspondence, and geolocation data from the victim’s device to a command server.
Researchers observed a drop in the number of mobile malicious installation packages detected for a fourth year running. From their peak of 8,526,221 in 2016, the number of mobile threats decreased to 3,503,952 in 2019, which is only 542,225 more than the number of threats detected in 2015.
For the third consecutive year, mobile malware attacks were most prevalent in Iran, where 60.64% of users were affected. The countries with the second and third highest percentages of impacted users were Pakistan and Bangladesh, where 44.43% and 43.17% of users were affected, respectively.
While the number of mobile ransomware Trojans detected rose by 8,186 to 68,362 year on year, one threat that was on the decline was mobile banking Trojans.
"In 2019, we detected 69,777 installation packages for mobile banking Trojans, which is half last year’s figure," wrote researchers.
However, the banking Trojans that were detected were worryingly advanced.
Researchers wrote: "The year 2019 saw the appearance of several highly sophisticated mobile banking threats, in particular, malware that can interfere with the normal operation of banking apps. The danger they pose cannot be overstated, because they cause direct losses to the victim. It is highly likely that this trend will continue into 2020, and we will see more such high-tech banking Trojans."
Nation states are actively attacking digital and internet-connected assets, but whether or not the US and other governments are doing enough to stop those attacks is a burning question that was debated in a session at the RSA Conference in San Francisco.
Sometimes there is a tendency for individuals or even organizations to question whether nation state cybersecurity attacks matter, which is something that Tom Corcoran, head of cybersecurity at Farmers Insurance Group, disagreed with. In his view, whether we like it or not, cyber space attacks matter to everyone now. To reinforce his point, he cited a famous quote attributed to Russian revolutionary Leon Trotsky at the turn of the twentieth century: “You may not be interested in war, but war is interested in you.”
What Nation States Want
The reasons why different nations engage in cybersecurity attacks are wide and varied, though Stewart Baker, partner at Steptoe & Johnson LLP, summarized the key threat actors succinctly.
“The Chinese just want to steal everything, Iran is out for revenge and the Russians just want to screw us up,” he said.
Ambassador Timo Koster, ambassador-at-large, Ministry of Foreign Affairs of the Kingdom of the Netherlands, had a somewhat more nuanced view on why different countries engage in cybersecurity attacks. In Koster’s view, there is a link between the nations that attack others over the internet, and what they do to their own people.
“They are largely authoritarian regimes that have a disregard for individual and collective human rights and that is exactly what they do to other nations,” Koster argued.
Liesyl Franz, senior policy advisor, Office of the Coordinator for Cyber Issues at the US Department of State, noted that each nation state has its own motivations for attacks and that all comes into play with how the US and other governments can deter them. She also noted that there are things that the US is in fact doing to deter nation state-backed cyber-attacks.
“Over the last 18 months, we have taken progressively nimble steps to call out nation state behavior in cyber, to attribute malicious cyber-behavior, calling them out and saying why it is bad and what harm it does,” she said.
One such action occurred on February 20 when the US government publicly accused Russia of a major cyber-attack in the Republic of Georgia. Franz noted that the US government isn’t just looking to “name and shame” nation states but rather it is looking to establish a framework for responsible state behavior in the cyber-domain.
“We think that the diplomatic aspect of the public attributions we made may not work today for what happened in Georgia,” Franz admitted.
She added that the next step after public disclosure could be sanctions or legal indictments. Koster added that deterrence in cyber space is a difficult thing and there is a need to have a continuum of responses available to help influence decisions and ultimately deter nation state cyber-attacks.
With cyber-attacks there is also a large risk of unintended consequences, which is another challenge that governments will need to consider. One primary example of that risk comes from the NotPetya attack, which has been attributed to Russia as a targeted attack against Ukraine. The NotPetya attack, however, had a much broader, global economic impact.
“Cyber is like climate, it doesn’t stop at the border,” Koster concluded.
Registration opened for the National Cyber League (NCL) Spring Season this week.
The NCL is a biannual cybersecurity competition for high school and college students aimed at training and mentoring the next generation of cybersecurity professionals.
The NCL invites students from across the US to compete in a virtual cybersecurity competition, consisting of a series of challenges that allow participants to demonstrate their ability to identify hackers from forensic data, break into vulnerable websites, recover from ransomware attacks and more.
Players of all levels are encouraged to participate, and the NCL gives those taking part the opportunity to prepare for careers in cybersecurity and potentially real-life situations, build their skillset, and gain a scouting report on their performance for potential hiring purposes.
“Our job at NCL is to give participants the best cybersecurity competition experience. We are the most inclusive, performance-based, learning-centered collegiate cybersecurity competition today,” says NCL commissioner, Dan Manson.
In 2019, more than 10,000 players competed in the NCL. Registration for the Spring 2020 season is open until March 20.
The head of London’s Metropolitan Police has fiercely defended her force’s use of live facial recognition (LFR) technology, arguing that privacy rights have changed in an age of social media and that some critics are “highly ill-informed.”
Speaking at think tank the Royal United Services Institute (RUSI), Met Police commissioner, Cressida Dick, argued that it was right for police to try and utilize technology and data more effectively than the criminal community.
She claimed that LFR is used in London in a “proportionate, limited way” which doesn’t store the public’s biometric data, and that only people wanted for serious crimes are placed on the watchlists used by such systems.
“It is not for me and the police to decide where the boundary lies between security and privacy; it is right for the police to contribute to the debate. But speaking as a member of the public, I will be frank,” she continued.
“In an age of Twitter and Instagram and Facebook, concern about my image and that of my fellow law-abiding citizens passing through LFR and not being stored, feels much, much smaller than my and the public’s vital expectation to be kept safe from a knife through the chest.”
However, privacy watchdog the Information Commissioner’s Office (ICO) last November claimed it had “serious concerns” about how UK police were using LFR in practice, and said it was working on a binding code of practice covering its use in public places.
A government report, ironically issued by RUSI last September, warned that machine learning algorithms like the sort used in LFR could be amplifying racial and other human biases in policing.
It argued that “systematic investigation of claimed benefits and drawbacks is required before moving ahead with full-scale deployment of new technology.”
However, Dick confidently claimed the Met’s LFR tech was not affected.
“We know there are some cheap technologies that do have bias, but as I have said, ours doesn’t,” she said. “Currently, the only bias in it is that it shows it is slightly harder to identify a wanted woman than a wanted man.”
Big Brother Watch released a report in 2018 claiming that LFR systems being used by the Met are 98-100% inaccurate.
Trend Micro blocked over 52 billion unique cyber-threats in 2019, 61 million of which were ransomware, according to its annual roundup report.
The security firm revealed that email remained by far the most popular threat vector, accounting for 91% of all threats. It detected 15% more email threats than in 2018, including a 5% increase in BEC detections.
Phishing detections dropped from 2018, but the number of unique Office 365-related URLs that the vendor blocked jumped 100% from the previous year.
Despite the number of new ransomware families falling by 55% year-on-year, there was a 10% overall increase in the detection of new components.
Critical vulnerabilities have always been one of the biggest sources of cyber-risk and in 2019 things escalated even further, with a 171% increase in high severity disclosures to Trend Micro’s Zero Day Initiative (ZDI).
IoT devices were also on the receiving end of a barrage of botnet-powered attacks targeting flaws in these devices, and Trend Micro also detected a massive 189% surge in brute force login attempts aimed at connected endpoints.
Trend Micro also detected a 6% increase in malicious Android apps to nearly 32 million last year, with many millions of downloads coming via the official Play store.
Trend Micro global director, Jon Clay, argued that digital transformation continues to open too many doors for cyber-criminals.
“Despite the prevalent ideals of digital transformation, lack of basic security hygiene, legacy systems with outdated operating systems and unpatched vulnerabilities are still a reality,” he added. “This scenario is ideal for ransomware actors looking for a quick return on investment. As long as the ransom scheme continues to be profitable, criminals will continue to leverage it.”
Trend Micro recommended network segmentation, regular back-ups and continuous monitoring to help tackle ransomware, alongside other best practices such as regular updates, virtual patching and tighter access controls with multi-factor authentication.
UK mapping agency Ordnance Survey has suffered a security breach leading to the compromise of data on 1000 employees, according to reports.
The government body is said to have discovered the incursion and immediately remediated the problem back in January. However, while staff and privacy watchdog the Information Commissioner’s Office (ICO) were informed, it has taken until now for the incident to go public.
It’s unclear when the breach happened, but the attacker is thought to have compromised the CFO’s email account via a phishing attack, exfiltrating payroll files, according to Verdict.
In a statement sent to the title, Ordnance Survey clarified that no customer information was compromised and its own systems remain unaffected.
“During IT security checks we identified a data breach which targeted an Ordnance Survey email account. We immediately took action and implemented a number of measures including informing the ICO,” it continued.
“Investigations have identified that some employee information has been potentially compromised. We are working with all affected employees providing advice and guidance on personal information security. As a precaution employees have been offered access to an identity fraud protection scheme.”
The ICO has confirmed that the remedial steps taken by Ordnance Survey following the incident are sufficient and it will be taking no further action.
Ashley Hurst, partner at law firm Osborne Clarke, argued that employees are still falling for phishing attacks, despite awareness-raising campaigns.
“Gone are the days where the phishing emails are riddled with typos and made from random email addresses. They are becoming increasingly difficult to spot, especially on mobile. Links can be hidden causing employees to click on them,” he added.
“A golden rule is never to type in a username or password at the request of an email unless you are 100% sure that the request is legitimate. Well-known brands simply don't make these requests by email.”
At RSA Conference in San Francisco, Steve Lipner, executive director of SAFECode, reflected on some of the mistakes he has made in 50 years of working in IT and cybersecurity. In a talk he introduced as “things I wish I’d done differently,” Lipner named six instances of products and services he's been involved with.
The first was Bell-LaPadula, a model used for enforcing access control in government and military applications, which he described as “multi-level security”: it enabled an administrator with a top secret clearance to read unclassified files. Initially calling it a “breakthrough in building secure computer systems and encouraging organizations trying to build secure systems,” he said it was based on the Department of Defense model of information security and classification.
However, the catch was that if you were logged in at the top secret level, received a secret email and wanted to reply, you had to log out and log back in at the secret level, or drill a hole in the model: a scenario he described as one that “became very frequent.”
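The trade-off Lipner describes falls directly out of the model's two rules. A minimal sketch of the Bell-LaPadula checks (level and function names here are illustrative, not from the talk):

```python
# Minimal sketch of the two Bell-LaPadula properties.
LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property ("no read up"): a subject may read
    objects at or below its own classification level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property ("no write down"): a subject may only write to
    objects at or above its own level, so data cannot leak downward."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# A top-secret administrator can read an unclassified file...
assert can_read("top_secret", "unclassified")
# ...but cannot write a reply to a secret email while logged in at
# top secret -- hence the log-out-and-back-in workaround Lipner cites.
assert not can_write("top_secret", "secret")
```

The second assertion is exactly the usability hole: replying to mail at a lower level is a "write down," which the model forbids.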
The second mistake involved VAX SVS, a virtual machine monitor for the VAX architecture. Lipner said that “nobody wanted a system that secure,” and the eventual move to PCs and consumer technology left it idle.
The third mistake was around the Digital Ethernet Secure Network Controller (DESNC): while he worked at DEC in the 1980s, Ethernet was adopted, but DESNC was eventually dropped because the hardware was too costly and its performance too limited.
The fourth mistake involved the Gauntlet firewall, an early application proxy firewall intended as protection against exploits targeting sendmail. Lipner admitted that not enough investment was made in its management, and the launch of Check Point FireWall-1, with a GUI and NT support, killed the project off.
The fifth mistake was inventing a key escrow system. He said that he “went down the path of building and selling packages of software tool kits,” but lessons should have been learned from Gauntlet: had a GUI and transparent capabilities been added, it might have succeeded. Not doing so hastened its end, as did the government abandoning the key escrow mandate.
The final mistake was the ‘think like a hacker’ concept, under which he encouraged all development on the next version of Windows to be stopped pending security review. He said this was met with skepticism, as no one had done it before, “so we invented what to do on the fly.”
Concluding with lessons learned, Lipner said that beyond the lost investment, it was important to realize that the customer is always right, “even if they are wrong”: it is best to work out what they want and come to a compromise.
He also encouraged usability to be considered: if something is too complicated, users will work around it or use another product. Finally, he urged delegates to move fast on time to market, as some of the ‘mistakes’ could have shipped in half the time they took.
In an era when data breaches can lead to corporate losses and ruin brand reputations, cybersecurity is no longer just an IT issue, it’s a board-level issue.
The question of what corporate boards should be doing and how governments can help them was the topic of a session at the RSA Conference in San Francisco, moderated by Larry Clinton, president and CEO of the Internet Security Alliance. The Internet Security Alliance publishes the Cyber-Risk Oversight Handbook, which is a guide for corporate boards on how to consider cybersecurity risk-related issues.
“The whole idea behind the guide is to basically take cybersecurity and embed it in the sorts of things that boards do and talk about, like growth, productivity as well as mergers and acquisitions,” Clinton said.
Clinton noted that fundamentally there are several key principles that the guide suggests corporate boards consider. The first principle is that boards recognize and understand that cybersecurity is not just an enterprise IT issue – it is an enterprise-wide management risk issue.
Panelist Nora Denzel, who serves as an independent board director at AMD, Ericsson and Norton LifeLock, commented that thinking about cybersecurity more holistically means that it’s not just thought of as a cost center. Rather, cybersecurity is an enterprise-wide strategic risk that has to be managed.
Stefan Becker, head of the Private Sector Office for the German Federal Office for Information Security, said that the idea of looking at cyber-risk more holistically is one that resonates in Germany too.
“We can’t just think about cybersecurity as being about agencies or any one department,” he argued. “Everyone in an enterprise has to think about cybersecurity – that’s the key to improving business impact.”
Boards Need to Work With IT Management
Another key principle outlined in the handbook is that it’s critical that corporate boards work with enterprise IT management, which is a point that was emphasized by Daniel Kroese, acting deputy assistant director at CISA.
“In the handbook, it states that it is incumbent on the decision makers and the places of authority in organizations to develop a full enterprise risk management cyber-framework, where the governance structure, the accountability, the people, processes and resources are abundantly clear,” Kroese explained.
By having that framework, Kroese said that it’s possible to dispel the myth that cybersecurity risk cannot be quantified. While it might be hard to get an exact number, he argued that, with a framework, accountability and risk can be managed.
Part of managing risk is being aware of threats, which is where the US government is playing a role. Kroese noted that CISA has information sharing programs to help corporate boards and executive management make strategic decisions about cyber-risk. Since the government has a broader view, it can also help identify areas of systemic risk, where risk spans multiple organizations and even industries.
The Human Element and the Role of the CISO
During the question and answer session that followed the panel, a member of the audience asked what CISOs should do to help the board.
Denzel commented that she tries not to ‘rough up’ the CISO, because she knows there is a labor shortage in cybersecurity. Rather, she said the boards she’s on prefer to give the CEO a hard time.
“Part of your role is to educate us,” she added. “Most boards don’t have a tech background.”
At RSA Conference in San Francisco, RSA’s Ankush Baveja made a case for a SOC effectiveness framework
SOC effectiveness is hard to measure without a valid framework, argued RSA’s presales engineer, Ankush Baveja. “Senior executives and senior leadership teams don’t get to see the results of what the SOC team is doing.” The solution, he argued, is simple: Create a framework to showcase a SOC’s maturity.
“You need to identify the SOC capability and link that to metrics. The metrics then have to link to the outcome,” explained Baveja. “Set your objective and within those you can have multiple goals. Further, develop questions around each goal to help you to identify the current state of that goal.” Your metrics must be something that is actionable, he added.
“Link your business objectives to the goals you are trying to achieve,” continued Baveja, who listed operations, engineering and IT and the blue and red team as the ‘SOC Capability Triad’.
RSA’s Baveja set out the following six-month plan for choosing and actioning a SOC metrics framework:
- First, choose a framework and download and use the framework sheet
- Define capabilities for your SOC – both current and a roadmap
- Identify metrics for each capability and use the GQIM methodology
- Define how these measurements affect your decisions
- Define stakeholders and assign ownership to monitor/alert
- Create your SOC Dashboard
- Set periodic checkpoints to review the goals
- If a metric doesn’t add value or lead to a decision, dump it
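The goal-to-metric linkage Baveja describes can be sketched as a small data model. This is a hypothetical illustration of a GQIM-style (Goal-Question-Indicator-Metric) roll-up; all class names, field names and thresholds are assumptions, not RSA's tooling:

```python
# Hypothetical GQIM-style mapping for a SOC dashboard (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float
    target: float

    def meets_target(self) -> bool:
        # Lower is better for time-based SOC metrics like triage delay.
        return self.value <= self.target

@dataclass
class Goal:
    objective: str    # the business objective the goal links to
    question: str     # the question developed around the goal
    metrics: list = field(default_factory=list)

def dashboard(goals):
    """Roll each goal up to a pass/fail summary for leadership review."""
    return {g.objective: all(m.meets_target() for m in g.metrics)
            for g in goals}

triage = Goal(
    objective="Faster incident triage",
    question="How long do alerts wait before an analyst picks them up?",
    metrics=[Metric("mean_time_to_acknowledge_minutes", 12.0, 15.0)],
)
print(dashboard([triage]))  # {'Faster incident triage': True}
```

A metric that never flips a goal's pass/fail status, or never changes a decision, is exactly the kind the final bullet says to dump.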
Finally, Baveja echoed the opening remarks of Rohit Ghai by warning information security professionals against focusing on the most catastrophic threats, urging them instead to prioritize the most likely ones. “Look at the most likely threats to your organization and build your content around that,” he concluded.
Shodan is a well-known security hacking tool that has even been showcased on the popular Mr. Robot TV show. While Shodan can potentially be used by hackers, it can also be used for good to help protect critical infrastructure, including energy utilities.
At the RSA Conference in San Francisco, Michael Mylrea, Director of Cybersecurity R&D (ICS, IoT, IIoT) at GE Global Research, led a session titled "Shodan 2.0: The World’s Most Dangerous Search Engine Goes on the Defensive," where he outlined how Shodan has been enabled to help utilities identify risks in critical energy infrastructure. Shodan, to the uninitiated, is a publicly available search engine tool that crawls the internet looking for publicly exposed devices.
Mylrea explained that utilities are often resource constrained when it comes to cybersecurity and are typically unaware of their risk. In recent years, there have been a number of publicly disclosed incidents involving utilities. To help solve that challenge, Mylrea proposed a project to the US Department of Energy (DoE) to enhance Shodan for utilities so they could use the tool to find risks quickly.
The initial response from the DoE was that they didn't want to invest in a hacking tool. Mylrea's team responded that adversaries don't need Shodan to find vulnerabilities and have their own tools already. An initial proof of concept was also conducted that was able to find vulnerable utilities, which convinced the DoE to move forward on the effort.
"Cyber-threats are evolving faster than systems defenses," Mylrea said. "Bad configuration and asset management leaves devices vulnerable and exposed."
Over a period of a year the publicly available version of Shodan has been enhanced with features to help improve the identification of vulnerable energy utilities. A private version of Shodan has also been developed just to help small utilities.
"Utilities need to really understand how to prioritize their resources in order to reduce risk," Mylrea said. "Shodan is a great way to quickly understand what's publicly exposed and vulnerable, so you can prioritize those resources and take steps to secure those critical cyber-assets."
As part of the effort to improve Shodan for utilities, a simple pull-down was added to enable search queries that identify exposed energy delivery systems. To help find those systems, Shodan was updated with new energy-specific protocols, ports and vulnerabilities. Mylrea noted that improved visualizations and mapping were key parts of the effort, making it easier for utilities to understand risk.
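To make the protocol-and-port idea concrete, here is a hypothetical helper that composes Shodan query strings for standard industrial/energy protocols. The port numbers are the protocols' well-known defaults; the dictionary, function name and country filter are illustrative assumptions, not the DoE project's actual queries:

```python
# Hypothetical Shodan query builder for energy-sector protocols.
ENERGY_PROTOCOL_PORTS = {
    "modbus": 502,    # Modbus/TCP, widespread in utility SCADA networks
    "dnp3": 20000,    # DNP3, common in North American grid equipment
    "s7": 102,        # Siemens S7comm PLCs
    "bacnet": 47808,  # BACnet building automation
}

def energy_query(protocol: str, country: str = "") -> str:
    """Build a Shodan search filter for one energy protocol,
    optionally scoped to a two-letter country code."""
    query = f"port:{ENERGY_PROTOCOL_PORTS[protocol]}"
    if country:
        query += f" country:{country}"
    return query

# Such a string could then be passed to the shodan package, e.g.
# shodan.Shodan(API_KEY).search(energy_query("modbus", "US")),
# or scheduled as one of the automated queries Mylrea recommends.
print(energy_query("modbus", "US"))  # port:502 country:US
```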
Recommendations and Lessons Learned
Through the process of surveying utilities and understanding their deployments and needs, there have been multiple lessons learned by Mylrea.
His top recommendation is for utilities to have a communications and recovery plan, so that if an incident occurs staff know how to react. He also recommends that utilities not run what is known as a "flat network," where everything runs in the same network segment. Rather, he suggested that utilities should run segregated networks where operational and IT technologies are separated and secured from one another.
Perhaps most importantly though, Mylrea advised the audience that utilities should use Shodan, setting up automated queries to search for bad configuration, exposed services, and potential vulnerabilities.
Speaking at RSA Conference in San Francisco on the subject of “Leading Change: Building a Security Culture of Protect, Detect and Respond,” Lance Spitzner, director of SANS Security Awareness, said that we often talk about security culture and the capabilities of the human, but fail to “humanize security.”
Spitzner said that the term “you cannot just patch stupid” frustrates him, as the human is a part of cybersecurity. While advancements have been made to improve the security of technology, he noted, we have not done the same for the “human operating system.” He said: “We’ve gotten so good at technology and securing technology that we’re driving bad guys to target the human.”
Citing Sir Isaac Newton’s first law, that an object at rest stays at rest until a force is applied, Spitzner said that in the case of the human factor “we need to apply force to [the] human.”
When it comes to education, Spitzner introduced two types of people, who he referred to as subject one (Homer Simpson) and subject two (Mr Spock). He said that the industry focuses too much on subject two, people who are logical and data-driven, “and we build initiatives based on the concept of subject two, because this is how we think.”
Subject one, however, is not analytical or data-driven, and Spitzner said it makes sense not to engage them with too technical an education, as to do so is “time and calorie intensive.” Therefore, we need to concentrate on designing usable concepts for subject one.
Spitzner said that humans are very emotional and if you roll out technology you “need to make it as simple as possible [because] people are not lazy or stupid but security is not their job.”
Citing the issue of rolling out a password refresh policy, he said that typically when this happens we “jump on it and talk about the top ten most common passwords and make fun of the users and we blame people.” However, the blame should be put on ourselves, he argued, and we must look to try and make the process more simple.
He recommended removing password expiration and killing complexity requirements in favor of passphrases. He also recommended providing tools such as password managers, “which are not perfect, but better than what we’re doing now.”
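The shift from complexity rules to passphrases can be shown in a few lines. This is a minimal sketch of the idea, not Spitzner's or any cited standard's policy; the function name and the 16-character threshold are assumptions for illustration:

```python
# Illustrative passphrase-friendly check: length matters, character
# classes do not (the 16-character minimum is an assumed threshold).
def acceptable_passphrase(candidate: str) -> bool:
    """Accept on length alone -- no uppercase/digit/symbol rules --
    so multi-word passphrases pass while short 'complex' strings fail."""
    return len(candidate.strip()) >= 16

# A plain multi-word passphrase is fine...
print(acceptable_passphrase("correct horse battery staple"))  # True
# ...while a short password packed with symbol substitutions is not.
print(acceptable_passphrase("P@ssw0rd!"))  # False
```

Pairing a rule like this with a password manager, as Spitzner suggests, keeps the burden on tooling rather than on users' memory.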
He said: “So next time you’re dealing with something, ask if you can eliminate it, simplify it, or replace it with tools or technology. We want people to do things, and make it as simple as possible.”
"For any security imitative or culture, it is not just about securing the human, but about humanizing security. In the last 20 years, we have got good at technology, but forgotten how to enable it.”