Detecting malicious behaviour in participatory sensing settings

Security is crucial in modern computer systems hosting private and sensitive information. Our systems are vulnerable to a number of malicious threats such as ransomware, malware and viruses. Recently, a global cyberattack (ransomware) affected hundreds of organisations, most notably the UK’s NHS. This malicious software “locked” the content stored on organisations’ hard drives, demanding money (to be paid in bitcoins) to “unlock” it and make it available to its owners again. Crowdsourcing (the practice of obtaining information by allocating tasks to a large number of people, e.g. Wikipedia) is not immune to malicious behaviour. On the contrary, the very openness of such systems makes them ideal targets for malicious users who wish to alter, corrupt or falsify information (data poisoning). In this post, we present an environmental monitoring example in which ordinary people take air quality readings (using mobile equipment) to monitor air pollution in their city or neighbourhood (see our previous post for more details on this example). Arguably, some people participating in such environmental campaigns can be malicious. Specifically, instead of taking readings to provide information about their environment, they might deviate and follow their own secret agenda. For instance, a factory owner might alter readings that show their factory polluting the environment. The impact of such falsification is huge, as it changes the overall picture of the environment, which in turn leads authorities to take the wrong actions regarding urban planning.

We argue that Artificial Intelligence (AI) techniques can be of great help in this domain. Given that measurements have a spatio-temporal correlation, a non-linear regression model can be overlaid on the environment (see previous post). The tricky part, however, is to differentiate between truthful and malicious readings. A plausible solution is to extend the non-linear regression model by assuming that each measurement has its own independent noise (variance), a property known as heteroskedasticity. For instance, a Gaussian Process (GP) model can be used initially and then extended to a Heteroskedastic GP (HGP). The consequence is that this individual noise indicates how much each measurement deviates from the truthful ones, which can be attributed either to sensor noise (which is always present in reality) or to malicious readings. An extended version of HGP, namely Trust-HGP (THGP), adds a trust parameter to the model that captures the probability of each measurement being malicious, taking values in the interval (0,1). The details of the THGP model, as well as how it is utilised in this domain, will be presented at the end of October at the Fifth AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2017). Stay tuned!
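
To give a feel for the heteroskedastic idea, here is a rough sketch (not the actual THGP model) of a GP posterior in plain numpy where each reading carries its own noise variance: a reading assigned a large variance is effectively down-weighted. All numbers are made up for illustration.

```python
# A minimal sketch of per-reading noise in a GP (not the THGP model).
# A reading given a large noise variance barely influences the posterior.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Locations and air-quality readings (one reading deliberately corrupted).
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 11.0, 25.0, 12.5, 13.0])   # the 25.0 is the "malicious" one

# Per-reading noise variances: a large value for the suspicious reading.
noise_vars = np.array([0.1, 0.1, 50.0, 0.1, 0.1])

K = rbf_kernel(X, X) + np.diag(noise_vars)
X_test = np.linspace(0, 4, 9)
K_s = rbf_kernel(X_test, X)

# GP posterior mean at the test locations (zero prior mean assumed).
posterior_mean = K_s @ np.linalg.solve(K, y)
print(posterior_mean)  # barely influenced by the corrupted reading
```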

How AI and humans can optimise air pollution monitoring

Air pollution is responsible for 7 million deaths per year according to the World Health Organization (WHO). Thus, it is crucial to dedicate resources to learning about and monitoring air quality in cities, both to assist authorities in urban planning and to raise people’s awareness of the impact of air pollution on their everyday lives. In our research, we provide a framework and algorithms that utilise the power of Machine Learning to effectively monitor an environment over time.
In particular, our proposal relies on the willingness of people to participate in environmental air quality campaigns. People can use mobile air quality devices to take readings in their city or neighbourhood. The major issue, however, is when and where these readings should be taken to monitor the city efficiently. People cannot provide an unlimited number of measurements, so readings should be taken in a way that maximises the information gained about the environment. In other words, we need to solve an optimisation problem, constrained by the number of readings people can provide over a period of time, to facilitate efficient exploration of the environment.
In order to solve the problem, we need a model of the environment as well as a way to measure the information contained in each reading (since we are interested in gaining the most information from a limited number of readings). To do that, we overlay a spatio-temporal stochastic process, a Gaussian Process, over the area of interest. Gaussian Processes can be used to interpolate over the environment, i.e., predict the air quality at unobserved locations, as well as to predict the state of the environment into the future. Importantly, Gaussian Processes also provide a measure of uncertainty/information about each location in space and time (via the predictive variance).
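As a minimal illustration (not our actual pipeline), the sketch below uses scikit-learn’s GP implementation on synthetic data and treats the predictive standard deviation as the information measure: the next reading goes where the model is most uncertain.
```python
# Sketch: GP predictive variance as an information measure (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Observed (location, pollutant level) pairs -- purely illustrative numbers.
X_obs = np.array([[0.0], [1.5], [4.0]])
y_obs = np.array([12.0, 15.0, 9.0])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1)
gp.fit(X_obs, y_obs)

# Candidate locations for the next reading.
X_cand = np.linspace(0, 5, 11).reshape(-1, 1)
mean, std = gp.predict(X_cand, return_std=True)

# The most informative candidate is the most uncertain one.
print("take the next reading near", X_cand[np.argmax(std)])
```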
The problem then becomes one of taking a set of measurements such that a utility function, built from the predictive variance provided by the Gaussian Process, is maximised. To solve this problem, we use techniques and algorithms from the broad areas of Artificial Intelligence and Multi-agent Systems.
In particular, an intelligent agent can decide when and where measurements should be taken to maximise the information gained about air quality while minimising the number of readings needed. The agent can employ greedy search techniques combined with meta-heuristics such as stochastic local search, unsupervised learning (clustering) and random simulations.
The main idea is to simulate the environment over time, asking “what if” kinds of questions. What if I take a measurement now, and one at night? What if I take a measurement downtown, or near my home? These kinds of questions are answered by running simulations on a cluster computing facility.
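A rough sketch of this “what if” loop, under the simplifying assumption that utility is the total predictive variance over a grid, might look like the following (synthetic one-dimensional data; not our actual algorithm):
```python
# Sketch of a greedy "what if" loop: pick, one at a time, the candidate
# measurement that most reduces total predictive variance over a grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

grid = np.linspace(0, 5, 50).reshape(-1, 1)       # locations we care about
candidates = list(np.linspace(0, 5, 11))          # where a reading could be taken
budget = 3                                        # readings we can afford
chosen = []

def total_variance(measured):
    """Total predictive variance over the grid given the measurement sites."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  alpha=0.1, optimizer=None)
    # GP variance depends only on *where* we measure, so dummy targets suffice.
    gp.fit(np.array(measured).reshape(-1, 1), np.zeros(len(measured)))
    _, std = gp.predict(grid, return_std=True)
    return float(np.sum(std ** 2))

for _ in range(budget):
    best = min(candidates, key=lambda c: total_variance(chosen + [c]))
    chosen.append(best)
    candidates.remove(best)

print("greedy measurement plan:", chosen)
```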
Finally, our findings indicate a significant improvement over other approaches.

Crowdsourcing categories

In this post I attempt to describe different types of crowdsourcing. This post will be continuously updated with examples, descriptions and potentially new categories.

Participatory sensing is about people carrying special equipment with them and taking measurements to monitor, for example, an environmental phenomenon.
Crowdsensing is about sharing data collected by sensing devices.
Crowdsourcing is an umbrella term that encapsulates a number of crowd-related activities. Wikipedia has the following definition: Crowdsourcing is the process of obtaining needed services, ideas, or content by soliciting contributions from a large group of people, and especially from an online community, rather than from traditional employees or suppliers.
Online crowdsourcing is about outsourcing online tasks to people. For example, people completing tasks for micropayments on Amazon Mechanical Turk.
Citizen science is about assisting scientists with tasks that are complex for machines and time-consuming for scientists. For example, identifying fossils in rocky environments from hundreds of pictures, or classifying the galaxies of the universe. More interesting projects can be found at zooniverse.org.
Spatial crowdsourcing is about tasks that require participants to go to specific locations. For instance, taking a photo of a plant that grows in a specific location requires participants to physically go to that location to complete the task.
Mobile crowdsourcing describes crowdsourcing activities that are processed on mobile devices such as smartphones and tablets.
Human computation is a type of collective intelligence and crowdsourcing where humans assist machines in performing tasks that are difficult for machines.
Opportunistic sensing is about collecting data without the user’s active contribution. For example, taking a measurement automatically when the device is near a certain location.

If you want to add to the descriptions or disagree with something above feel free to comment below.

Research Internship – Data science/Machine Learning

This post describes my experiences from my three-month research internship at Toshiba Research Labs, Bristol, UK, and the project I worked on (September – December 2015).

I remember the day I first went there for my interview. The building sat between a wonderful small square park and a river, just a five-minute walk from the city centre. But this was not the only thing I really liked. Working there, I realised the importance of a firm’s culture. I appreciated the value of collaboration, brainstorming and creativity. It was an academia-like environment: friendly, down-to-earth people with lots of ideas and knowledge on a variety of subjects. Everyone was approachable and you could discuss anything with them. I could communicate with colleagues effectively without having to worry about business formalities.

The project I worked on was intriguing; that was the main reason I applied for this research internship in the first place. It combined my academic interest in Machine Learning and my personal interest in human wellbeing. In short, the project was about Mood Recognition at Work using Wearable Devices. In other words, understanding, learning and attempting to predict someone’s mood (happiness/sadness/anger/stress/boredom/tiredness/excitement/calmness) using just a wearable device (a smart wristband, a chest sensor, or anything able to capture vital signs). Sounds impossible, right? How can you predict something as complicated as human emotions? We, as humans, often struggle to understand our own mood. For example, how would you say you feel right now? Happy? Sad? OK? This is indicative of the complexity of the problem we were facing. However, we wanted to do unscripted experiments, meaning we did not want to induce any emotions in the participants of our study. We rather wanted them to wear a smart device and log their mood at 2-hour intervals while still at work, as accurately as they could. Surprisingly, at least for me, there was genuine variation in their responses. Some higher, some lower, but all of them varied. That was encouraging.

We had to study the literature and do some research to answer the following question: how could we extract meaningful features from vital signs and accelerometer signals that would have predictive power for emotions? After some digging around, we found the relevant literature. It was not a new concept. There were studies, both in the medical literature and in Computer Science, associating heart rate with stress and skin temperature with fatigue. We wanted to take this further and check whether a combination of all these signals could have greater predictive ability. Intuitively, think about the times you felt stressed. Your heart might pump faster, but sometimes your foot or hand might be shaking as well. The shaking can be captured by the accelerometer, and together the signals can serve as an additional indicator of a stressful situation.
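
To make the idea of feature extraction concrete, here is a hedged sketch of the kind of windowed statistics we mean; the window length, the signals and the specific statistics are illustrative choices, not the feature set used in the study.

```python
# Sketch: simple statistics over fixed-length windows of heart rate and
# accelerometer magnitude (illustrative, not the study's feature set).
import numpy as np

def window_features(heart_rate, accel_xyz, window=60):
    """Return one feature row per non-overlapping window of `window` samples."""
    accel_mag = np.linalg.norm(accel_xyz, axis=1)   # magnitude of 3-axis signal
    rows = []
    for start in range(0, len(heart_rate) - window + 1, window):
        hr = heart_rate[start:start + window]
        acc = accel_mag[start:start + window]
        rows.append([
            hr.mean(), hr.std(), hr.max() - hr.min(),   # heart-rate statistics
            acc.mean(), acc.std(),                      # movement intensity
        ])
    return np.array(rows)

# Fake signals just to show the shapes involved.
hr = 70 + 5 * np.random.randn(600)
acc = np.random.randn(600, 3)
print(window_features(hr, acc).shape)   # (10, 5): 10 windows, 5 features each
```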

We ended up with hundreds of features, and tested a number of basic machine learning techniques, such as Decision Trees and SVMs.
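
The modelling step itself can be as simple as the following sketch, which fits a decision tree and an SVM with scikit-learn on placeholder data (random labels stand in for the real mood annotations):

```python
# Sketch: compare two simple classifiers on a feature matrix.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # e.g. the windowed features above
y = rng.integers(0, 2, size=200)       # placeholder happy/sad labels

for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=3)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```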

Our results were good and comparable to those in the literature. Thus, we decided to publish our findings in the PerCom 2016 conference proceedings (WristSense Workshop): http://ieeexplore.ieee.org/document/7457166/

Furthermore, a number of ideas for patents were discussed and exciting new avenues for potential work were drawn up.

Overall, I would recommend an internship during a PhD programme as it is a very rewarding experience.

I would like to take this opportunity to thank all of the employees, managers and directors there for the unique experience and their confidence in me.

Inference vs Prediction: What do we mean, where are they used and how?

A lot of people seem to confuse these two terms in the machine learning and statistics domain. This post will try to clarify what we mean by each, where each one is useful and how they are applied. I personally understood the difference when I took a class called Intelligent Data and Probabilistic Inference (by Duncan Gillies) during my Master’s degree. Here, I will present a couple of examples to build an intuitive understanding of the difference.

Inference:

You observe the grass in your backyard. It is wet. You observe the sky. It is cloudy. You infer it has rained. You then turn on the TV and watch the weather channel. It has been cloudy, but there has been no rain for a couple of days. You remember the sprinkler ran on a timer a few hours ago. You infer that this is the cause of the grass being wet.

(The creepy example) Imagine you are staring at an object in the evening, a bit far away in a corner. Getting closer… you observe that the object is staring back at you. You infer that it is an animal. You are brave enough to get even closer. You can now see the eyes, the fur, the legs and other characteristics of the animal. You infer that it is a cat. A simple procedure for your brain, right? It feels trivial to you, and probably silly to even discuss. Of course you can recognise a cat. But in fact this is a form of inference. Say the cat has some features: eyes, fur, shape, etc. As you get closer to it, you assign different values to these variables. For example, the eyes variable was initially set to 0, as you couldn’t see them. As you move closer, you become more certain of what you observe. Your brain takes these observations and converts them into the probability that the object is a cat. Say we have a catness variable that represents the probability of the object being a cat. Initially, this variable could be near zero. Catness increases as you move closer to the object. Inference takes place and updates your belief about the catness of the object. A similar example can be found here: http://www.doc.ic.ac.uk/~dfg/ProbabilisticInference/IDAPISlides01.pdf
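
If you like numbers, here is a tiny version of the catness update written as repeated applications of Bayes’ rule; the likelihood values are made up purely for illustration.

```python
# A tiny numeric "catness" update: apply Bayes' rule once per observation.
prior_cat = 0.1                      # belief before seeing anything clearly

# (cue, P(cue | cat), P(cue | not cat)) -- likelihoods are made up.
observations = [
    ("eyes reflect light", 0.90, 0.20),
    ("has fur",            0.95, 0.30),
    ("cat-shaped body",    0.90, 0.05),
]

belief = prior_cat
for name, p_if_cat, p_if_not in observations:
    evidence = p_if_cat * belief + p_if_not * (1 - belief)
    belief = p_if_cat * belief / evidence        # posterior becomes new prior
    print(f"after observing '{name}': catness = {belief:.2f}")
```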

Prediction:

You observe the sky. It is cloudy. You predict that it is going to rain. You hear in the news that the chance of rain, despite the clouds, is low. You revise your prediction: most probably it is not going to rain.

Given the fact that you own a cat, you predict that when you come home, you will find it running around.

Final Example:

Understanding the behaviour of humans in terms of their daily routine, or their daily mobility patterns, requires the inference of latent variables that control the dynamics of their behaviour. Knowing where people will be in the future is prediction. However, a prediction cannot be made if we have not inferred the relationships and dynamics of, say, human mobility.

Verdict:

Inference and prediction answer different questions. Prediction can be a simple guess or an informed guess based on evidence. Inference is about understanding the facts that are available to you. It is about utilising the information available to you in order to make sense of what is going on in the world. In one sense, prediction is about what is going to happen, while inference is about what happened. In the book “An Introduction to Statistical Learning” you can find a more detailed explanation. But the point is that, given some random variables (X1, X2, …, Xn), or features, or, for simplicity, facts, if you are interested in estimating something (Y), then this is prediction. If you want to understand how Y changes as the random variables change, then it is inference.

In a short sentence:  Inference is about understanding while prediction is about “guessing”.

Submodularity in Sensor Placement Problems

Many problems in Artificial Intelligence, and in computer science in general, are hard to solve. What this means in practice is that it could take a computer hundreds, thousands or even millions of years of computation to solve them. Thus, many scientists create algorithms that solve difficult problems approximately, but in a sensible amount of time, i.e., seconds, minutes or hours.

One such problem is the sensor placement problem. The key question is to find a set of locations at which to place some sensors in order to achieve the best coverage of the area of interest. To solve this problem exactly, a computer has to evaluate all possible combinations of placing the available sensors at the different locations. To give some numbers, with 5 sensors and 100 possible locations, one has to try 75,287,520 combinations to find the best arrangement. Imagine what happens when the problem is about placing hundreds of sensors in a city where there are hundreds or thousands of candidate locations.
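
That number is simply “100 choose 5”, which you can verify in one line of Python:

```python
# Quick check of the combinatorics: choosing 5 locations out of 100.
import math
print(math.comb(100, 5))   # 75287520
```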

In such problems, submodularity comes in handy. It is an extremely important property, used in many sensor placement problems, that describes the behaviour of certain set functions. The main idea is that adding an element to a small set yields a higher return/utility/value than adding the same element to a larger set. This is better understood with an example. Imagine having 10,000 sensors scattered in a big room, taking measurements of the temperature every 2 hours. Now imagine adding another sensor to that room. Have we really gained much by doing so? We have a large set and we add something to it. Now imagine the same room with only 1 sensor. Adding one more can give us a better understanding of some corner, or a better estimate of the true average temperature of the room. That sensor is much more valuable than in the previous case. This is what I mean by saying that adding something to a smaller set has a higher utility.
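
Here is the same intuition as a toy computation: the value of a set of sensors is the number of room cells they cover, and adding a sensor to a small set (contained in the larger one, as submodularity requires) gains at least as much as adding it to the larger set. The numbers are made up for illustration.

```python
# Toy demonstration of diminishing returns with a simple coverage function.
def coverage(sensors, radius=1, room=range(20)):
    """Number of room cells within `radius` of at least one sensor."""
    return len({cell for cell in room
                for s in sensors if abs(cell - s) <= radius})

small_set = {4}                      # contained in the larger set below
large_set = {1, 4, 7, 10, 13}
new_sensor = 11

gain_small = coverage(small_set | {new_sensor}) - coverage(small_set)
gain_large = coverage(large_set | {new_sensor}) - coverage(large_set)
print(gain_small, ">=", gain_large)  # 3 >= 0: the marginal gain shrinks
```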

It turns out that this property is very useful in mathematics, and in computer science and AI in particular, as it allows us to build algorithms with theoretical guarantees. It has been proved that a greedy algorithm achieves at least 63% (i.e., 1 − 1/e) of the optimal value. This was initially proved by Nemhauser et al. in a mathematical context and later applied by Krause et al. in computer science, specifically to the sensor placement problem. The image below illustrates this property in diagrams to give a better feeling of what it is about.

Submodularity (taken from Meliou et al. power point presentation)

Gaussian Process Summer School

Last September I had the opportunity to attend the Gaussian Process Summer School in Sheffield, UK. It is a twice-yearly event that lasts 3–4 days. First of all, I have to say that it was an awesome experience, even though I did not have much time to explore the city. Besides, it was raining heavily for most of my time there. Well, we did have an excursion to a local brewery.

Anyway, the event was structured as full-day lectures, every day, given by experts in the field. And by experts I mean people like Rasmussen, who wrote the famous book on Gaussian Processes (GPs) that is cited in nearly every paper containing those two words nowadays, and of course Neil Lawrence, who runs a whole lab in Sheffield working on Gaussian Processes and organises this school.

What I enjoyed the most, though, were the lab sessions scheduled between lectures. It was the perfect time to get our hands dirty, and a chance to use GPy, a Python library developed in Sheffield that includes almost everything related to GPs. I have to admit that GPy seems a much more powerful tool than GPML, which I currently use (a GP library for Matlab). The exercises were perfectly suited to playing around with the features of GPy as well as discovering the potential of GPy and Gaussian Processes in general. In fact, the exercises were given as IPython notebooks. An IPython notebook is an interactive computational environment in which you can combine code execution, rich text and mathematics. Specifically, we were given snippets of code with some crucial parts missing, which we were supposed to fill in.
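
To give a flavour of what the exercises looked like, here is a minimal GP regression example following the standard GPy tutorial pattern (the data are synthetic, and the actual exercises differed):

```python
# Minimal GPy regression sketch in the spirit of the lab exercises.
import numpy as np
import GPy

X = np.random.uniform(0, 10, (30, 1))
Y = np.sin(X) + 0.1 * np.random.randn(30, 1)

kernel = GPy.kern.RBF(input_dim=1, variance=1.0, lengthscale=1.0)
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize(messages=False)          # fit kernel hyperparameters

X_new = np.linspace(0, 10, 100).reshape(-1, 1)
mean, var = model.predict(X_new)        # posterior mean and variance
```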

Another memory from the GP school was the presence of Joaquin Quiñonero Candela, who gave lectures at the summer school as well as at the University of Sheffield. Joaquin was previously a researcher at Microsoft and is now Director of Research in Applied Machine Learning at Facebook, where they apparently make heavy use of advanced machine learning techniques and push the field to its limits. Importantly, Joaquin has co-authored papers with Rasmussen on Gaussian Processes, and he struck me as a brilliant guy.

That is pretty much my experience of the school. In another post, I will introduce GPs and explain, as intuitively as I can, their usefulness and applicability.
