
I was just reading the newest post by Google's Project Zero: a report on a massive bug that allows remote code execution by exploiting a vulnerability in the Broadcom 802.11 SoC used in many smartphones.

Actually, the bug itself is not massive (it is, after all, just a simple buffer overflow caused by boundaries not being properly checked when processing a specific type of packet), but its consequences are massive indeed. The vulnerability is specific to the parsing of certain messages in 802.11z TDLS, a mode of P2P ad-hoc communication. The report published by Gal Beniamini is just the first part of the overall project, and it "just" demonstrates remote code execution on the Broadcom Wi-Fi SoC, but it hints that this can be leveraged to gain remote code execution on the application processor:

In the next blog post, we’ll see how we can use our assumed control of the Wi-Fi SoC in order to further escalate our privileges into the application processor, taking over the host’s operating system!

Long story short, this vulnerability has massive consequences. Theoretically (I am eager to see the second part of this report!), an attacker can take over a smartphone's OS by simply sending malformed WiFi frames, achieving full device takeover through WiFi proximity alone. The good news is that this bug has already been patched for both iOS and Android devices, so I'd say go ahead and update your mobile's OS if you haven't in a while.

I strongly recommend reading the report by Gal Beniamini, as it is excellently written and easy to understand and follow. It is actually a great reference/introduction to buffer overflows and how they can be leveraged for malicious intent. The overall exploit is rather complex, but it is explained very nicely, step by step, in the report.
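For readers new to the bug class, the vulnerable pattern at the heart of flaws like this one can be sketched in a few lines of C. This is purely illustrative (the function name, buffer size and structure are made up, not Broadcom's actual firmware code): the bug is a copy whose length comes from an attacker-controlled frame and is never checked against the destination buffer.

```c
#include <string.h>

#define MAX_IE_LEN 32  /* hypothetical size of the on-stack buffer */

/* Illustrative 802.11 information-element handler; names are made up. */
static int parse_ie(const unsigned char *ie, size_t ie_len) {
    unsigned char dest[MAX_IE_LEN];

    /* Vulnerable pattern: memcpy(dest, ie, ie_len) with no check,
       overflowing dest whenever an attacker sends ie_len > 32. */

    /* Fixed pattern: validate the attacker-controlled length first. */
    if (ie_len > sizeof(dest))
        return -1;  /* reject the malformed frame */
    memcpy(dest, ie, ie_len);
    return 0;
}
```

In a frame parser running on a Wi-Fi SoC with no stack protections, overwriting the bytes past `dest` is exactly what lets an attacker redirect execution.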

Fun stuff!

I was reading this morning a new paper on the topic of LTE IMSI catchers:

Mjølsnes, Stig F., and Ruxandra F. Olimid. “Easy 4G/LTE IMSI Catchers for Non-Programmers.” arXiv preprint arXiv:1702.04434 (2017).

Although this is old news, it is exciting to see that the recent discovery and implementation of LTE IMSI catchers by the team of Prof. Seifert at TU Berlin (Oct 2015) has sparked interest in this area. The paper also mentions the DoS threats introduced by the same team in [1]. I have done some work on and implementation of LTE IMSI catchers and the DoS exploits myself in the past as well ([2] and [3]).

I was giving a talk on this topic last week at UC Irvine, trying to encourage graduate students to focus their PhD research in this area as there is still a lot of work to be done. We need the talented minds of graduate researchers to come up with new threats and, more importantly, solutions to these threats.

Back to this new paper: it is a great overview of IMSI catchers, and it is great that the authors implemented their IMSI catcher using an alternative tool (OpenAirInterface). I found it interesting, though, that they state that implementing an IMSI catcher on openLTE requires source code modification, such that it is not a viable option for "non-programmers".

Although the claim that their implementation is for non-programmers is obviously correct, their LTE IMSI catcher uses very similar software and the same computing equipment as the ones in [1,2,3]. I would argue that adding three lines of code to openLTE is something a non-programmer could do as well; this is what the authors of [1] did. The only modification required in openLTE (as I have explicitly stated at every talk I have given) is essentially to add an fprintf statement where openLTE parses the AttachRequest message or the TAU/Location Area Update message, although one can do slightly fancier things.
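For illustration, the kind of one-line change described above would look roughly like this. The struct and function names below are hypothetical stand-ins, not openLTE's actual identifiers; in openLTE the print statement would go wherever the NAS AttachRequest is parsed.

```c
#include <stdio.h>

/* Hypothetical stand-in for the parsed NAS Attach Request message */
struct attach_request {
    unsigned long long imsi;
};

/* The whole "IMSI catcher": log the identity as soon as it is parsed */
static void log_attach_request(FILE *out, const struct attach_request *req) {
    fprintf(out, "AttachRequest from IMSI %015llu\n", req->imsi);
}
```

That is the entire modification: the identity arrives in the clear in the attach exchange, so capturing it is just a matter of printing it.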

Anyhow, maybe I am too optimistic, and expecting a non-programmer to add an fprintf statement to openLTE is perhaps asking too much 🙂

Regardless, this new paper is great, very interesting, and an excellent reference on this topic. I wonder whether they will be presenting their work at a conference soon.

I look forward to more and more research in this area.

[1] Shaik, Altaf, et al. "Practical attacks against privacy and availability in 4G/LTE mobile communication systems." arXiv preprint arXiv:1510.07563 (2015).

[2] Jover, Roger Piqueras. “LTE security and protocol exploits.” ShmooCon (2016).

[3] Jover, Roger Piqueras. “LTE security, protocol exploits and location tracking experimentation with low-cost software radio.” arXiv preprint arXiv:1607.05171 (2016).

Authentication in mobile networks relies on a symmetric-key system. For each mobile subscriber, there is a secret key known only to the mobile device and the network operator. Actually, it is not the device itself that holds the key, but the SIM card. On the network side, in the case of LTE, the secret key is stored in the Home Subscriber Server (HSS).

Based on this pre-shared secret key, a mobile device and the network can mutually authenticate each other. This is not necessarily the case, though. For some reason, someone must have decided, when designing 2G GSM, that having the end point authenticate the mobile network was not a requirement... too bad that the lack of mutual authentication opens the door to all kinds of rogue base station MitM attacks. Bad things also happen when this pre-shared "secret" key is sent from the SIM card manufacturer to the mobile operator in the clear on a bunch of DVDs and someone manages to steal them.

After years of security research in mobile networks, identifying, implementing and testing protocol exploits, I started thinking that perhaps it would be a good idea to transition the security architecture of mobile networks towards a PKI-based system. This is why I really enjoy reading research papers with PKI proposals for mobile networks, a rather rare topic in the research community. Thanks to Google Scholar, a very interesting paper showed up on my radar: Chandrasekaran, Varun, and Lakshminarayanan Subramanian. "A Decentralized PKI In A Mobile Ecosystem."

PKI would increase the complexity of each cryptographic operation, but it is not as if device and network authenticate each other constantly. Definitely, a lot of research would have to be done to validate whether it would be feasible.

With a PKI-based authentication architecture in mobile networks, so many cool things could potentially be done. For example, it is very well understood that, regardless of mutual authentication and strong encryption, a mobile device engages in a substantial exchange of unprotected messages with *any* LTE base station (malicious or not) that advertises itself with the right broadcast information (and this broadcast information is transmitted in the clear in the SIB broadcast messages). This is the source of a series of protocol exploits and attacks. Perhaps, by means of PKI, broadcast messages could be "signed" by the operator in a way that lets mobile devices verify their freshness (to avoid replay attacks) and verify that the base station is legitimate. This would allow mobile devices to verify the legitimacy of a base station before engaging in RACH procedures, RRC connection establishments, NAS attach exchanges, etc.
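As a thought experiment, such a signed broadcast message could look something like the sketch below. Everything here is speculative (LTE SIBs carry no signature or freshness fields today, the field names and sizes are made up, and the signature check is stubbed out); it only illustrates the check a UE could perform before engaging with a base station.

```c
#include <stdint.h>
#include <stdbool.h>

/* Speculative operator-signed SIB; no such fields exist in LTE today */
struct signed_sib {
    uint8_t  payload[64];   /* broadcast system information */
    uint64_t timestamp;     /* freshness counter to defeat replay */
    uint8_t  signature[64]; /* operator signature over payload||timestamp */
};

/* Placeholder for a real public-key signature check (e.g. ECDSA) */
static bool sig_ok(const struct signed_sib *s) { (void)s; return true; }

/* The check a UE could run before any RACH/RRC/NAS exchange */
static bool sib_trustworthy(const struct signed_sib *s, uint64_t now,
                            uint64_t max_age) {
    if (now - s->timestamp > max_age)
        return false;   /* stale: possible replay by a rogue base station */
    return sig_ok(s);   /* verify the operator's signature */
}
```

The interesting design question is the freshness window: too wide and a rogue base station can replay recorded SIBs, too narrow and clock drift breaks legitimate cells.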

Anyhow, this is a very interesting paper on cool things that could be done by applying PKI to mobile networks. Worth reading.


(Yes, after months, maybe years, I decided to get back to being somewhat active on my blog... Most likely I'll just be posting about interesting security and wireless/mobile stuff.)

I was reading this morning a very cool paper from a team at MITRE implementing a jamming mitigation engine leveraging beamforming. The idea is to generate a null in reception in the direction from which the jamming signal is coming.

Link to the paper:

It is very interesting that this type of jamming mitigation is becoming popular. It is an area with a lot of potential, especially in the context of 5G, communication at mmWave frequencies, and massive antenna arrays.

My former team and I worked on a very similar idea in the past. We implemented beamforming-based mitigation of radio jamming against LTE (details in this paper), and there is a bunch of patents already public on that technology: beamforming at the eNodeB and beamforming at the UE. In the case of the UE, we also used beamforming to increase the capacity and throughput of the system... a somewhat utopian idea that, actually, now makes much more sense with carrier frequencies in the mmWave range and above and massive antenna arrays in the context of 5G. I strongly recommend reading Prof. Rappaport's work in this area for more details.

Anyhow, the paper is VERY interesting and presents some exciting results in this area.

I was recently contacted by someone with questions regarding a document I wrote a few years ago (LTE PHY fundamentals) as part of a class at Columbia University, which is hosted on my website. The confusion was regarding Doppler shift and the time separation of the reference signals in LTE.

Quoting the message:

I was trying to tell you that 500 km/h does not mean a Doppler shift that you wrote in your document. If the carrier frequency is low and the receiver is moving through the transmitter Doppler shift will be zero cos(90).

Please read the LTE documentation carefully: Universal Mobile Telecommunications System (UMTS); LTE; Requirements for Evolved UTRA (E-UTRA) and Evolved UTRAN (E-UTRAN). In chapter 7.3, it is clearly written that this speed can be from 15 to 120 in the best case with a Doppler shift, not 500 as you wrote and even calculated the Doppler shift.

After responding to the question, I thought it would be a good idea to write a quick post here and reference it from my website, to clarify this topic in case other people have the same questions.

The 3GPP standards do account for mobility of up to 500 km/h. Checking ETSI TR 125 913 V9.0.0 (Universal Mobile Telecommunications System (UMTS); LTE; Requirements for Evolved UTRA (E-UTRA) and Evolved UTRAN), one can read:

The E-UTRAN shall support mobility across the cellular network and should be optimized for low mobile speed from 0 to 15 km/h. Higher mobile speed between 15 and 120 km/h should be supported with high performance. Mobility across the cellular network shall be maintained at speeds from 120 km/h to 350 km/h (or even up to 500 km/h depending on the frequency band). Voice and other real-time services supported in the CS domain in R6 shall be supported by EUTRAN via the PS domain with at least equal quality as supported by UTRAN (e.g. in terms of guaranteed bit rate) over the whole of the speed range. The impact of intra E-UTRA handovers on quality (e.g. interruption time) shall be less than or equal to that provided by CS domain handovers in GERAN.

The mobile speed above 250 km/h represents special case, such as high speed train environment. In such case a special scenario applies for issues such as mobility solutions and channel models. For the physical layer parametrization EUTRAN should be able to maintain the connection up to 350 km/h, or even up to 500 km/h depending on the frequency band.

Regarding this topic, Samsung did some very interesting experiments on the high-speed case inside a plane flying at 750 km/h. Also, a recent paper presented at a Sigcomm workshop whose TPC I was part of reports high-speed measurements of LTE (check the paper titled "Performance of LTE in a High-velocity Environment: A Measurement Study").

As for the Doppler shift, the Doppler equation does contain a cos(alpha) term, but alpha will only be 90 degrees when a mobile is right under the cell tower. In general, in mobile communications, one does not consider the special case of alpha=90 (see below for more details). Anyhow, system specifications are designed for the worst-case scenario. In the case of LTE, the maximum possible Doppler shift occurs for the highest carrier frequency (~2GHz at the time I wrote the document), v=500km/h and alpha=0 (cos(0)=1). That is why the separation of the pilot tones in the LTE/OFDMA lattice is 0.5ms (the derivation of the value 0.5ms is in my document). Essentially, the Doppler shift defines the coherence time, which is the duration of time over which the channel does not change "substantially" or, more mathematically, the delay for which its autocorrelation is "higher" than a certain value (there are different ways to define coherence time, depending on how "strict" one wants to be). Pilot tones or reference signals are used to sample the channel in order to perform equalization and other tricks, and the Doppler shift defines the maximum sampling period that still samples the channel correctly. If the channel can change as fast as every 0.5ms, one needs at least one sample every 0.5ms. Therefore, the reference signals are spaced 0.5ms apart, tackling this way the worst-case scenario for the coherence time.

Generally, in terrestrial wireless communications, one usually does not even consider alpha, because the heights of the towers (10 to 50m or so) are much smaller than the distances between the mobile devices and the towers (up to 35km for the largest supported cells), so the value of alpha is always very small. In radar applications, however, alpha is considered, because planes fly at high altitudes.

Anyway, the best way to read about these concepts, and have them explained much better than I did here, is to check Rappaport's book.


Check this IEEE ComSoc tutorial on Advances in Coordinated Multi-Cell Multi-User MIMO Systems. Free of charge for a limited time.

Advances in Coordinated Multi-Cell Multi-User MIMO Systems


I recently read a very interesting paper that discusses one of the coolest wireless-comm-related projects I have seen in a while. A team of researchers from the University of Washington presented this paper at Sigcomm this summer in Hong Kong, where, by the way, it received the best paper award.

The idea is simple but could lead to a whole new technology with a broad spectrum of applications. These researchers have designed a simple communication system that operates with no need for a battery or power source. The nodes take the signals already being transmitted around them (for example, TV signals) and modulate them in such a way that they are able to communicate with each other. Essentially, it is a step beyond, for example, RFID tags, which use the power of the transmitted signal to power themselves and send a reply. Although ambient backscatter (the name the authors give to this technology) has very short range, it could potentially be used for multiple applications, including certain types of wireless sensor networks.

The ambient backscatter project website presents the following video. One can observe the huge antennas these small devices have, which indicates that they operate at fairly low frequencies and gives an idea of the very low power they operate at. There is no room for the inefficiencies of small twisted or patch antennas; they keep it simple for now with a dipole tuned to the wavelength.

By the way, this project is led by, among others, Prof. Shyam Gollakota. He was recently awarded the prestigious ACM Doctoral Dissertation Award and shortly after got a position as Assistant Professor at the University of Washington. Good stuff.


While browsing more material for this post, I found out about another project of Gollakota's: WiSee. Again, really cool stuff. They use the subtle variations a person's movements cause in wireless signals to do gesture control. Very interesting. Check it out here.

I recently read a very interesting and detailed article that a colleague at work recommended. The article presents a very thorough overview of the latest revolution in consumer electronics combined with wireless communications: the Internet of Things (IoT).

The concept of the IoT defines a (near-)future scenario where most (if not all) things in our physical world and lives will be interconnected using all kinds of wireless protocols, such as WiFi, ZigBee, Z-Wave, etc. On top of this myriad of interconnected sensors and actuators, a new playground will be ready for developers and people with ideas to create new services (and even entire businesses), all following a scheme similar to the "mobile OS - app" model. And all these new services will be based, according to the article's author, on simple "if - then" rules:

If the sun hits your computer screen, then you lower a shade. If someone walks in the door, then you turn down your music. If there's too much noise outside, then you close your window. If you have a Word document open but haven't finished writing a sentence in 10 minutes, then you brew another pot of coffee."

But all these cool new applications will result in new challenges. One of them (the main one, according to the author) will be battery and wireless charging technology. Indeed, while semiconductor and transistor technology has evolved steadily following Moore's law, battery technology has been pretty much stagnant (what time in the afternoon do you have to charge your smartphone on a day you go to work? If it is after 4pm, I want to know what phone you have). There is a great need for better and longer-lasting batteries for mobile devices, as well as for some kind of technology that feeds itself wirelessly from the signals it receives, similar to an RFID tag. Perhaps some day the power consumption of electronic devices will be low enough to charge the battery from the actual power the wireless signal carries. Until then, some proposals might help us along the way, for example the wireless electricity transmission proposed by the MIT start-up WiTricity.


I am a bit surprised that the author does not put more emphasis on the security challenges that the IoT will bring to communication systems. I do not think that "[…] Just as with social networking, the privacy concerns of a sensor-connected world will be fast outweighed by the strange pleasures of residing in it". I would definitely not feel comfortable at all with my garage door opening when my IoT hub at home, after receiving a message from my car's geo-location system, sends an "open" command over Z-Wave… especially knowing that someone will show how to hack Z-Wave this summer at Blackhat. I agree with the author, however, that "[…] our recent hacking epidemic has largely exploited the human interface—the password. We're always the weak link in online security […]".

Anyhow, one thing I do know is that in the near future the IoT will change things, and our day-to-day lives will look much like the movie Minority Report, with cereal boxes with displays and interactive commercials, personalized advertisements in the subway, and smart stores.

Recently I had the pleasure of meeting Dr. Ted Rappaport and attending a very interesting talk he gave at NYU Poly. The topic of the talk was his proposed "renaissance of wireless communications". It was very exciting to meet him in person, given that I pretty much started learning everything I know from his book "Wireless Communications: Principles and Practice". Sitting there listening to his talk, I realized I should have brought my copy of the book to get it signed. After all, his book and Proakis' "Digital Communications" are the two pillars of everything I like. Learning about the Fourier transform in school when I was 19 was an eye-opening experience that told me I was indeed in the right place (the right major). A couple of classes I took over the following years (COM-1, COM-2 and RadioCom at the ETSETB) required me to read those two books, and by then I knew what I wanted to do.

Anyhow, back to Rappaport's talk. I find his view very interesting. Essentially, he is proposing to design communication systems in the millimeter-wave range, at very high frequencies, and he is actually proving it possible at NYU Wireless.


Some of these frequency ranges are known to suffer from extreme propagation attenuation due to the interaction of the electromagnetic waves with oxygen molecules, which attenuates the signal by well over 10dB per kilometer. For these high-attenuation cases, Rappaport proposes creating "whisper nets" that die off in well under a meter of propagation. This way, multiple parts of a complex system can be interconnected, making wires unnecessary. And given that these networks have such a short range, one does not have to worry about external attackers sniffing the traffic or injecting stuff into it.

The other frequency ranges suffer a still-reasonable attenuation and, according to some initial results, could host future 5G wireless systems. These systems would have a huge bandwidth (BW), allowing for great throughput. Although I had the chance to ask a couple of questions, I forgot to ask him whether he thinks the huge increase in throughput will come purely from increased BW (plenty of BW is available in the frequency ranges he proposes!) or whether he expects advanced modulation techniques to play a substantial role as well. After all, we are getting close to Shannon's limit in terms of bits per second per Hertz (bps/Hz).

Based on the observations of Martin Cooper, the capacity of wireless systems has been roughly doubling every 30 months. This increase has been due to (these numbers are extracted from: M.-S. Alouini and A.J. Goldsmith, "Area Spectral Efficiency of Cellular Mobile Radio Systems," IEEE Transactions on Vehicular Technology, vol. 48, no. 4, pp. 1047-1066, July 1999) a wider spectrum (25x gain), spectrum splitting (5x), better modulations (5x... this, I believe, was before OFDMA; I wonder whether OFDMA increased capacity more than 5 times...) and a huge gain (1600x) from reducing the size of the cells. Although there is a huge improvement from making cells smaller, it does not make sense to make them much smaller than they are now (metro-, pico- and femto-cells), so I guess sooner or later we will have to look in new directions. Spatial diversity, another topic discussed in Rappaport's talk, has always been the one I see as most promising and suitable. If you add to that a huge BW in the millimeter-wave range, even better!

Working in security for wireless and mobile networks, I can tell you something that should not be news to most of you: smartphones are the newest target for malware writers and hackers.

The FCC recently published a tool that gives advice and guidelines to maximize the security of your device. You can access it here.

After you select your device's OS (Android, iOS, BlackBerry and Windows Phone are the options), the tool presents a checklist of tasks that will help with both the physical and the software-based security of mobile devices. Among others, it recommends setting up PINs and passwords and regularly updating the phone's operating system. In terms of app security, the FCC recommends avoiding jailbreaking or rooting handsets, installing apps only from trusted sources, and understanding what an app will have access to on the phone, in terms of permissions, before installing it in the first place.

About me:

Born in Barcelona, moved to Los Angeles at age 24, ended in NYC, where I enjoy life, tweet about music and work as a geek in security for wireless networks.
All the opinions expressed in this blog are my own and are not related to my employer.