Dave Bittner: [00:00:03] Hello everyone, and welcome to the CyberWire's Research Saturday, presented by Juniper Networks. I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down threats and vulnerabilities, and solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
Dave Bittner: [00:00:25] And now a word from our sponsor, Juniper Networks. Organizations are constantly evolving and increasingly turning to multicloud to transform IT. Juniper's connected security gives organizations the ability to safeguard users, applications, and infrastructure by extending security to all points of connection across the network. Helping defend you against advanced threats, Juniper's connected security is also open, so you can build on the security solutions and infrastructure you already have. Secure your entire business, from your endpoints to your edge, and every cloud in between, with Juniper's connected security. Come see Juniper at RSA 2020 in booth 6161 to see why NSS Labs says Juniper is back in security. And we thank Juniper for making it possible to bring you Research Saturday.
Dave Bittner: [00:01:19] Thanks also to our sponsor, Enveil, whose revolutionary ZeroReveal solution closes the last gap in data security: protecting data in use. It's the industry's first and only scalable commercial solution enabling data to remain encrypted throughout the entire processing lifecycle. Imagine being able to analyze, search, and perform calculations on sensitive data, all without ever decrypting anything – all without the risks of theft or inadvertent exposure. What was once only theoretical is now possible with Enveil. Learn more at enveil.com.
Rami Puzis: [00:01:59] So, we discovered a kind of weakness in online social networks.
Dave Bittner: [00:02:04] That's Rami Puzis. He's an assistant professor at Ben-Gurion University. The research we're discussing today is titled "The Chameleon Attack: Manipulating Content Display in Online Social Media."
Rami Puzis: [00:02:17] This is a feature that can be misused by an adversary to perform a few different kinds of scams through the social networks. So, a user could be fooled into interacting with some content on social media that can later be switched to a different display, a different visual representation, which appears to have completely different content. So, you can – as we say in some of our publications – you can press "like" on the cute kitty, and a day later it can be switched to a video from some terrorist organization.
Dave Bittner: [00:03:03] I see. And as the user, you would have no idea that this change had happened behind the scenes.
Rami Puzis: [00:03:10] Currently, as it is implemented in the social networks, no, you would not. Because social networks do track changes to the posts and they do display notifications if the post is edited, but through this feature, which can be misused by an adversary, the actual post is not changed – only the way it is displayed to the user.
Dave Bittner: [00:03:35] Well, let's walk through it together. Describe to us what exactly is going on here. What are these people doing to make this work?
Rami Puzis: [00:03:42] So, these people post a link to a website. It can be a website they own, it can be a redirection link, some kind of link shortener service – anything that allows them to change the eventual target of the link being posted. On Facebook, the platform follows the redirection until the final destination, and from that final destination, Facebook extracts the title, the preview image, and a short description of the website.
Dave Bittner: [00:04:19] Hmm.
Rami Puzis: [00:04:20] Assume it's some YouTube video. The link is posted on Facebook. Users can comment on, like, and interact with this post any way they like. Later on, the user who posted the link may change the destination of this link to point to a different web resource, or change his own website to display something different, and ask Facebook – through the application programming interface, through their services – to refresh the link preview.
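The redirection trick Puzis describes can be sketched as a minimal short-link server whose target the attacker can flip at any time: the posted URL never changes, only where it points. This is an illustrative sketch, not the researchers' actual tooling; the server, port, and destination URLs are placeholders.

```python
# Sketch of the mutable redirect behind a chameleon post (illustrative only).
# The attacker controls this short-link server; the URL shared on the social
# network stays the same, but its 302 target can be switched at will.
import http.server
import threading

TARGET = {"url": "https://example.com/cute-kitten"}  # current destination

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Always redirect to whatever the attacker currently points at.
        self.send_response(302)
        self.send_header("Location", TARGET["url"])
        self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for the sketch.
        pass

def serve(port=8080):
    """Start the redirect server on a background thread and return it."""
    srv = http.server.HTTPServer(("127.0.0.1", port), RedirectHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

After flipping `TARGET["url"]`, the attacker would ask the platform to re-scrape the link so the cached preview follows the new destination – which, as Puzis notes, the platforms expose as a normal service.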
Dave Bittner: [00:04:58] Now, when you describe the attack in your research paper here, you align the different phases of a chameleon attack to a standard cyber kill chain. Can you walk us through those phases?
Rami Puzis: [00:05:10] To walk through the phases of the standard cyber kill chain, we first need to assume some goal for the attacker. Since we have a few different kinds of attacks, let's start with a basic one – let's say shaming. If an adversary would like to discredit some political figure, or anyone else on the web, they should first collect some intelligence about this figure: which kinds of posts this figure interacts with, which kinds of posts he likes or comments on. And then they publish a post with a link to a resource that looks appealing to the person who will be discredited later on. Of course, they need to attract the attention of that specific person, but this is done using the usual techniques – either social engineering or just targeted marketing.
Rami Puzis: [00:06:10] Once they have the attention of this person and some interaction with him – in the form of comments, for example – then the chameleon post can change the way it is displayed and reveal its true self, by pointing to a different web resource and then refreshing the link preview, so it will look as if it had always pointed to that illegal or otherwise objectionable web resource. Then you can attract public attention and take screenshots of that person liking something he never should have liked.
Dave Bittner: [00:06:56] Hmm. Now, what are some of the ways that you're seeing this deployed? What are some of the uses for it? You just talked about shaming someone – what are some of the other things that it's being used for?
Rami Puzis: [00:07:09] So, you can use it for more trivial things, like promotion or other commercial misuse cases. For example, one could post a link to a well-known, famous web resource, collect likes, collect comments, collect social capital, and then switch this already-promoted post to point to a different web resource, with a different preview and a different display, which will inherit all the social capital collected by the old post.
Dave Bittner: [00:07:48] Which social networks are susceptible to this, and to what degree do each of them allow this sort of thing to take place?
Rami Puzis: [00:07:56] So, Facebook is the first one. On Facebook, only the owner of the post can modify the way it is displayed and refresh the link preview cache. If the post is shared by some other user, the shared copies are not affected by this manipulation – only the original one is. And no other user can manipulate the way the post is displayed.
Rami Puzis: [00:08:26] On LinkedIn, anyone can change the way a link is previewed and refresh this cache. Of course, in order to change the display to what the adversary would like it to be, the adversary needs to control the link. So, if I am the adversary and I can get you to post my links on LinkedIn, then later on I can change the web resource to which these links lead and ask LinkedIn to refresh the preview of these links. So, all the posts that you have made on LinkedIn with my link will now show something different.
Rami Puzis: [00:09:14] The last one is Twitter. Twitter generally does not allow editing tweets. Once you tweet, you tweet – you cannot modify the content of your tweet. But Twitter, similar to LinkedIn, allows anyone to request a refresh of a link preview. If I'm the adversary and I can change the final destination of my links, then I can ask Twitter to refresh the way these links are previewed. And the tweets of anyone who tweeted this link will now look different.
Dave Bittner: [00:09:57] Hmm. Now, one of the things you outline in your research is an experiment that you all did. You set up some things on Facebook looking to evade censorship in some Facebook groups. Can you walk us through – what did you do here?
Rami Puzis: [00:10:12] Yes, we identified several moderated groups – in this case, sports fans. The groups were split into fan groups of rival teams, for example, Arsenal versus Chelsea. Then we created several Facebook pages, some of which were such chameleon pages. We did not use profiles for this experiment, in order to comply with Facebook's user license agreement and their regulations. Using these pages, we first tried to enter a group with the page displaying posts with videos of the rival team – for example, a page with a video of a Chelsea player trying to enter a group of Arsenal fans. Of course, in most cases, entry was denied.
Rami Puzis: [00:11:15] Then, a week later, the same page changed the way it looks – it's a chameleon page, so it adapts to the new fan group, and all the videos now support the right team – and we applied to the same group again. And of course, the pages were accepted this time.
Dave Bittner: [00:11:40] I could see this going the other way, where you could post things that were attractive to the members of the group, and then after the fact change it to something that was controversial, and that's one of the things you describe here. That's not what you did in your test.
Rami Puzis: [00:11:54] Yes, of course, we didn't do that. We did not interact with any members of the groups. We did not post at them, did not comment on them, and did not in any way interact with human accounts – again, in order to comply with Facebook's rules and also with the university's ethics committee requirements. In a very few cases, we did interact with the group moderators, since we had to answer their questions. And at the end of the experiment, we notified all the group owners that the experiment had taken place and explained its consequences.
Dave Bittner: [00:12:39] Now, what are your recommendations for folks to mitigate this?
Rami Puzis: [00:12:43] Well, the mitigation lies first with the social networks themselves. For Facebook and Twitter, this is a very easy tweak to make, because both networks already maintain a link inspection service – they have URL blacklists, they mark websites as suspicious, and so on. So, it would be very easy for them to display a notification that the link preview was changed, and also to maintain a history of these changes, the same way that Facebook maintains a history of edits to a post.
Rami Puzis: [00:13:26] For LinkedIn, it will be a little bit harder, because currently they do not use their own link shortener service. But they can also track any changes made to link previews through their service, and in that case, they would be able to display a notification.
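The preview-history mitigation Puzis proposes amounts to keeping an audit trail per link and flagging any re-scrape whose result differs from the cached version. A minimal sketch, with hypothetical names – this is not any network's real API:

```python
# Sketch of the platform-side preview-history check (hypothetical names).
# A platform stores every scraped preview per link; on each re-scrape
# request it compares against the last stored version and keeps the trail,
# so users can be shown a "preview changed" notification.
def record_preview(history, url, preview):
    """Record the newly scraped preview for `url`; return True if it changed.

    history: dict mapping url -> list of preview dicts (title, image, ...)
    """
    versions = history.setdefault(url, [])
    changed = bool(versions) and versions[-1] != preview
    if not versions or changed:
        versions.append(preview)
    return changed
```

A change flagged here is exactly the moment a chameleon post "reveals its true self," which is why a notification plus history would neutralize the retroactive switch.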
Rami Puzis: [00:13:49] For users: just watch your likes and use them with caution. Like and comment only on links and posts that you trust. And, you know, that's the usual recommendation to anyone to beware of phishing attempts or any other social engineering scam.
Dave Bittner: [00:14:15] Is there anything in particular for group moderators – some things that they can look out for?
Rami Puzis: [00:14:22] That's a tough question. A user who would like to investigate and inspect a profile or a post can use the social network APIs to see the history of changes to its link previews. Now, if the chameleon post has never been activated so far, they will not see such a change – they will only see its initial disguise, and it will be hard to anticipate whether it will ever change. On the other hand, if you see that a posted link leads to some IP address rather than a well-known domain, that's a suspicious indication in the first place.
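The raw-IP heuristic Puzis mentions is easy to check programmatically. A minimal sketch, assuming the link's final destination has already been resolved by following redirects; the helper name is hypothetical:

```python
# Sketch of the "bare IP instead of a domain" suspicion check.
# A link whose host is a raw IPv4/IPv6 address rather than a named domain
# is a red flag for moderators inspecting a submitted post.
import ipaddress
from urllib.parse import urlparse

def looks_suspicious(url):
    """Return True if the URL's host is a raw IPv4 or IPv6 address."""
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False
```

This is only one weak signal; as the interview notes, a dormant chameleon post shows no preview-change history yet, so heuristics like this are about raising suspicion, not proof.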
Dave Bittner: [00:15:15] Our thanks to Rami Puzis for joining us. The research is titled, "The Chameleon Attack: Manipulating Content Display in Online Social Media." We'll have a link in the show notes.
Dave Bittner: [00:15:30] Thanks to Juniper Networks for sponsoring our show. You can learn more at juniper.net/security, or connect with them on Twitter or Facebook.
Dave Bittner: [00:15:38] And thanks to Enveil for their sponsorship. You can find out how they're closing the last gap in data security at enveil.com.
Dave Bittner: [00:15:46] The CyberWire Research Saturday is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe, and I'm Dave Bittner. Thanks for listening.