Data Privacy

WhatsApp: skepticism of remote communication tools is healthy

Every day, as a remote, distributed workforce, employees fire off communications to one another, to customers, to other businesses, and to the board… Communications are the conduits that turn plans into actions, ideas into growth, leads into customers. And digital communications, simple and reliable as they are, turn every exchange into a pocket of permanence.

That permanence makes communications a direct target for attack: every time information is communicated, it becomes a potential vulnerability, a record to be accessed. Consider Slack channels. They may feel like behind-closed-doors conversations, but each one is a record of that conversation. It's like having every room in the office mic'd up and set to tape: there's a record of it somewhere, even if Slack's cloud is (hopefully) harder to access than a pile of tape behind a locked door.

To mitigate this, companies can set retention policies in Slack that expire messages and files after a set amount of time (a minimal sketch of doing so via the API follows below). Do all corporate communications tools offer such policies? More pertinently, what's the actual privacy policy of each of those tools?
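In practice, Slack exposes retention controls in the admin settings and, on Enterprise Grid, through its API. A minimal sketch, assuming the admin method admin.conversations.setCustomRetention and a token with the admin.conversations:write scope (both worth verifying against Slack's current docs; the channel ID is a placeholder):

```python
# Hypothetical sketch: set a 30-day retention window on one Slack channel.
# Assumes Slack's Enterprise Grid admin API method
# admin.conversations.setCustomRetention and an admin token with the
# admin.conversations:write scope -- verify both against Slack's docs.
import os

import requests

SLACK_TOKEN = os.environ["SLACK_ADMIN_TOKEN"]  # admin-level token (assumption)
CHANNEL_ID = "C0123456789"                     # placeholder channel ID

resp = requests.post(
    "https://slack.com/api/admin.conversations.setCustomRetention",
    headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
    data={"channel_id": CHANNEL_ID, "duration_days": 30},
    timeout=10,
)
resp.raise_for_status()
body = resp.json()
if not body.get("ok"):
    raise RuntimeError(f"Slack API error: {body.get('error')}")
print(f"Retention for {CHANNEL_ID} set to 30 days")
```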
And so we come to WhatsApp's terrible new year. A poorly communicated update (ostensibly designed to enable better B2C communication channels) has caused mass introspection among users about how WhatsApp handles their metadata, or more accurately, how that metadata might be shared with Facebook, which owns WhatsApp. Many users have decided they no longer trust WhatsApp to maintain their privacy and have sought out what they perceive as more trustworthy alternatives. Signal and Telegram have reported a massive uptick in active users in the past couple of weeks; so much so, in Signal's case, that the service temporarily went down under the weight of all the new users. WhatsApp has since paused the update and scrambled to repair the damage, but convincing the public to reinterpret the subtleties of its privacy policy may be a lost cause.

Given that inspecting third-party privacy policies is the CSO/CISO's job, is the business world undergoing a similar introspection about its communications tools? According to research conducted by Pulse, most IT executives aren't using encrypted messaging to communicate confidential information. When asked, "What tool(s) do you use to communicate confidential information?" the most common response by far was email (42%), with relatively few mentions of encrypted email services such as ProtonMail (11%) or encrypted messaging apps (6%). That sound you can hear right now? The collective shudder of CISOs around the world.

CISOs have a fight on their hands: convincing the business of the necessity of secure communications. It's a fight on multiple fronts, because CISOs must also scrutinize vendors' security protocols. That pressure can yield results; Zoom has made great strides in addressing security concerns raised by businesses and consumers (here's a regularly updated history of Zoom security flaws and fixes).

Remote work was forced upon many of us. We've all had to adapt and learn new tools, and the onus is on each of us to understand the tools we use, whether for the sake of our business or our own privacy. Ultimately, when it comes to cybersecurity, communication isn't just about what you say; it's also about the medium in which you say it. The medium we choose is up to us.
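To make the email-versus-encrypted-messaging gap concrete: plaintext email leaves the message readable to any server that stores or relays it, while an encrypted payload is opaque without the key. A minimal illustration using the Python cryptography package (symmetric Fernet for brevity; real messengers such as Signal use far more elaborate asymmetric, ratcheting protocols):

```python
# Minimal illustration of what encryption buys a confidential message.
# Symmetric Fernet is used for brevity; real E2E messengers (e.g. Signal)
# use asymmetric, ratcheting protocols rather than a single shared key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # held only by sender and recipient
cipher = Fernet(key)

message = b"Confidential: Q3 acquisition target attached"
token = cipher.encrypt(message)    # what an intermediary or backup sees

print(token)                       # opaque bytes without the key
print(cipher.decrypt(token))       # original message, key holders only
```

An email provider holding plaintext sees the first print's worth of nothing useful only if the body was encrypted before sending; with ordinary email, it sees the message itself.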

Are you concerned about the security of your business communications?

Top Answer : Not much; we avoid using unsanctioned channels and keep a rather conservative approach (email).

Is more data always better? Are companies collecting an unnecessary amount of consumer data?

Top Answer : The core of the CSO role is effective risk management: business empowerment, with risk tolerance alongside it. And I think if the perception around data is simply "more is better, more is more, the end," it becomes difficult to have an informed risk tolerance around the acquisition of data. If there's no liability, and there's a Turing-test kind of analog that has to happen in the courts for AI (and ultimately the company behind it) to be rendered liable, I look at this and I think, "What is legal's responsibility here for the implementation of AI?" And then for us as cyber practitioners: if there is no liability behind certain algorithmic implications, what then, for the larger status quo? And how do you form risk tolerance atop that?

If these companies were affected, then the foundation of computing could be at risk. If you could manipulate the hardware layer via the firmware, BIOS, etc., then a threat actor could weaponize well below the operating system, which brings into question the integrity of the entire computing stack and everything above it. The firmware and BIOS are like the rebar and concrete for a building: if that foundation is weak, then the entire structure and anything dependent on it is at risk. We cannot underestimate the potential or the severity of these companies being affected by the SolarWinds hack and what that means for the foundational computing hardware they provide to the world. What do others think? How could this impact your organization? Big tech companies including Intel, Nvidia, and Cisco were all infected during the SolarWinds hack - The Verge

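One concrete control hiding in this question is verifying a firmware image against a vendor-published digest before flashing, so a tampered download is caught before it reaches the foundation. A minimal sketch, with the file name and expected SHA-256 digest as placeholders:

```python
# Minimal sketch: verify a firmware image against a vendor-published
# SHA-256 digest before flashing. The file path and expected digest are
# placeholders -- substitute the values from your vendor's site.
import hashlib

FIRMWARE_PATH = "bios_update_v2.17.bin"   # placeholder image file
EXPECTED_SHA256 = "aabbccdd" * 8          # placeholder 64-hex-char digest

h = hashlib.sha256()
with open(FIRMWARE_PATH, "rb") as f:
    # Hash in 1 MiB chunks so large images don't need to fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

if h.hexdigest() != EXPECTED_SHA256:
    raise SystemExit("Digest mismatch: do not flash this image")
print("Firmware digest verified")
```

The caveat cuts to the heart of the question: a checksum only proves the image matches what the vendor published. In a SolarWinds-style supply-chain compromise, the vendor's own pipeline produces and signs the tampered build, which is exactly why a compromise at this layer is so hard to detect.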

With an increased dependence on the cloud, is privacy still a consumer concern?

Top Answer : I kind of vacillate between desperation and hope. If you'd asked me 10 years ago whether consumer privacy was still going to be a topic, I would have been despondent. I would have said there's no way people are going to be concerned about privacy: they give it up every time they turn around, nobody does anything with privacy in mind, nobody will ever pay for privacy. Now, if you look at the dialogue, there's a proposition on the November 3rd ballot to expand the CCPA. I mean, it makes me super happy, because consumer privacy is still a concern. I suspect there are going to be special interests, certainly; privacy is always going to be a concern for intelligence interests, nation-state interests. There's always going to be some group of people who are very, very concerned about privacy, and the larger that group is, the more likely it is that consumers will have good privacy alternatives. I'm super happy that Apple is doing the things it's doing to try to curtail government snooping.

Edge computing entails so much infrastructure with privacy/security implications...how worried should consumers be?

Top Answer : I almost wonder if one day the end user won't care about personal infrastructure at all and only certain businesses will; the end user, it seems, will just have devices. I'm kind of sitting here going: am I really going to have facial detection on the servers in my garage because I'm worried about people accessing the data? At some point the trade-off is between "do I upload it to someone's cloud infrastructure, or do I build my own?" And at some point, I don't think I'll be able to.

How worried are you about privacy issues due to IoT?

Are there any ethical concerns you have about implementing AI/ML?

Top Answer : I'm in healthcare, and what we're trying to do is develop drugs. Right now it's like a 10-year cycle, so we're trying to find a way to speed that up. There's a lot of ethics involved in AI/ML around what you can access when you're swallowing sensors. What kind of data can you pull? Is it going to be PHI? Is it going to be HIPAA data? Things like that. What kind of data do you want to release to your doctor once you're all sensorized? Because pretty soon we're going to be poking sensors all over our bodies that do a great number of things. So that's where we have to be careful.

What are your top IoT privacy concerns?

Top Answer : For us at GitLab, we're all remote, and one of our biggest obstacles is putting in place what you'd normally consider basic security measures in a traditional brick-and-mortar world. We're in the process of getting an MDM product installed on laptops. Because of the very nature of open source and DevOps, a majority of the company is thinking about, "What data are you collecting, and what is your responsibility around that data?" With WFH, you're not just getting the IP address of someone's work laptop; there's the potential you're getting other information about things in their environment. It's that real mashup of personal and work environments that makes it tricky. How do you strike that balance, especially when you're trying to sell into the enterprise and certain companies demand you implement certain security measures and track certain things? I don't have an answer. It is a really complicated problem, and I think it's just going to get worse. But there's going to be an opportunity for new solutions to help manage this stuff. For example, cloud data products like Databricks. It's only going to get more and more complex, especially as these IoT devices gain compute power and become capable of doing so much more.
