26th September 2017 - Dr. Lorena Gutiérrez-Madroñal

Testing IoT systems through IoT-TEG

The Internet of Things (IoT) has become increasingly popular in different areas. One of the main drawbacks of IoT systems is the amount of information they have to handle. This information arrives as events that need to be processed in real time in order to make correct decisions. As a consequence, new ways (tools, devices, mechanisms...) of obtaining, processing and transmitting information have to be put into action. Event Processing Languages (EPLs) are worth mentioning here: they were created to detect, in real time, interesting situations in a particular domain, and they use patterns to filter the information. A huge amount of data is processed and analysed by EPLs, so any programming error could seriously affect the outcome through poor decision making. Given that processing the data is crucial, testing and analysing programs which use an EPL is required, and the most common mistakes that programmers could make have to be detected. A large number of events with specific values and structures are needed to apply any kind of testing to programs which use an EPL. As this is a very hard task and very error-prone if done by hand, a method is presented which addresses the automated generation of events. This method includes a general definition of what an event type is, and a representation for it is proposed. Additionally, the IoT-TEG event generator is developed based on this definition. Results from experiments and real-world tests show that the developed method meets the demanded requirements.
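
As a rough illustration of the idea, an event type can be declared as a set of typed fields with value constraints, from which well-formed random events are generated in bulk. The Python sketch below is illustrative only; the event-type names, field constraints and code are not IoT-TEG's actual format or implementation:

    import random
    import string

    # Hypothetical event-type definition: field name -> (type, constraints).
    # IoT-TEG's real representation differs; this only sketches the concept.
    TEMPERATURE_EVENT_TYPE = {
        "sensor_id": ("string", {"length": 8}),
        "temperature": ("float", {"min": -20.0, "max": 60.0}),
        "timestamp": ("int", {"min": 0, "max": 10**9}),
    }

    def generate_event(event_type):
        """Generate one random event conforming to the given event type."""
        event = {}
        for field, (ftype, c) in event_type.items():
            if ftype == "string":
                event[field] = "".join(
                    random.choices(string.ascii_lowercase, k=c["length"]))
            elif ftype == "float":
                event[field] = random.uniform(c["min"], c["max"])
            elif ftype == "int":
                event[field] = random.randint(c["min"], c["max"])
        return event

    # A test suite for an EPL program might request thousands of such events.
    events = [generate_event(TEMPERATURE_EVENT_TYPE) for _ in range(1000)]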

Speaker: Dr. Lorena Gutiérrez-Madroñal
Venue: MB564
Time: 14:00 - 15:00

7th September 2017 - Prof. Krishnamachar Sreenivasan

Flow in Computer Systems

New lines of attack are required to design computers efficiently in view of rapid advances in VLSI technology and increases in software complexity. This note presents a novel approach of portraying computer job execution as a multi-phase single stream of instructions, data and control competing for hardware and software resources. Fluid flow methods, though appealing, are limited to a narrow range of problems; a theoretical treatment based on stochastic processes, complemented by measurements, is envisaged instead. Job flow is frequently unstable, and understanding the factors that lead to flow instability is essential to prevent the frequent, annoying occurrence of the 'Blue Screen', a state which this analysis finds is caused by eight factors. A job flow Reynolds number, R, is defined as the ratio of factors aiding job execution to factors staunching job execution. Application layers frequently employed to increase programmer productivity, and advances in computer design meant to exploit Instruction Level Parallelism, are found to decrease program performance. The Online Transaction Processing workload is stable for values of R greater than 0.8; it was used in controlled experiments to collect five sets of results in which the single-processor architectural speed varied from 3.10 to 3.41 cycles per instruction.
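
In symbols (the factor names below are illustrative; the talk defines the aiding and staunching factors precisely):

    % Job-flow Reynolds number: factors aiding execution over factors
    % staunching it (F_aid and F_staunch are illustrative names).
    R = \frac{F_{\mathrm{aid}}}{F_{\mathrm{staunch}}},
    \qquad \text{with stable OLTP job flow observed for } R > 0.8.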

Speaker: Prof. Krishnamachar Sreenivasan
Venue: MB146
Time: 15:00 - 16:00

9th May 2017 - Dr. Florian Steinberg

Introduction to second order complexity theory

Classical computability and complexity theory use Turing machines as a foundation for considering effectivity (computability) and efficiency (polynomial-time computability) of operations on countable discrete structures. For many applications in engineering it would be desirable not only to compute on countable discrete structures but also on continuous structures like the real numbers. The most common model for computation on the real numbers by digital computers is floating-point computation. Modelling real numbers by machine numbers is unsatisfactory from a mathematical point of view, as the content of a proof of correctness of a mathematical algorithm is usually completely lost during an implementation. A mathematically rigorous and realistic model for computation on continuous structures is provided by computable analysis. The finite strings used as codes for elements of a structure are replaced by total string functions that provide on-demand information about the element they encode. Functions between encoded spaces can then be computed by operating on Baire space, the space of all total string functions. Computation on Baire space is done using oracle Turing machines. Intuitively, oracle Turing machines correspond to programs with function calls. It is not a priori clear which of these programs should be considered fast. We introduce the accepted class of polynomial-time computable operators on Baire space, i.e. we specify which programs with function calls should be considered efficient. The definition resembles classical polynomial-time computability very closely (due to work by Kapron and Cook). However, there are some important differences, which we investigate in some detail.
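
A loose sketch of the intuition (this illustration is informal, not one of the talk's definitions): an element of Baire space is a total function from strings to strings, and an oracle machine computes an operator mapping one such function to another, consulting its argument like an ordinary function call:

    from typing import Callable

    # Baire space: total functions from strings to strings.
    Baire = Callable[[str], str]

    def shift_operator(phi: Baire) -> Baire:
        """An operator on Baire space, written as a program with a
        function call: the returned function answers each query by
        consulting the oracle phi. Whether such a program is 'fast'
        depends on bounding its running time in terms of the input size
        and the sizes of the oracle's answers (second-order polynomials,
        in the sense of Kapron and Cook)."""
        def psi(s: str) -> str:
            return phi("0" + s)  # one oracle query per call
        return psi

    length_encoder: Baire = lambda s: "1" * len(s)
    print(shift_operator(length_encoder)("101"))  # prints "1111"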

Speaker: Dr. Florian Steinberg
Venue: MB108
Time: 11:00 - 12:30

28th March 2017 - Dr. Joao Carreira

Understanding people in videos

The problem of “understanding” people in videos has been a long-standing central challenge in computer vision and artificial intelligence. In this talk I will first discuss recent technical advances in human pose estimation using models that make iterative passes through convolutional networks. I will then describe a novel (sorely needed) dataset for human action recognition, gathered from YouTube, with an order of magnitude more videos than existing datasets. I will show that this dataset enables a new type of spatiotemporal model which obtains results considerably above the state of the art on popular benchmarks.
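
The iterative-pass idea can be pictured as repeated refinement (a schematic sketch, not the talk's architecture; predict_correction stands in for a convolutional network):

    import numpy as np

    def refine_pose(image_features, predict_correction, steps=4, num_joints=17):
        """Iterative pose estimation sketch: start from a fixed initial
        pose and repeatedly predict a correction from the image and the
        current estimate, feeding the result back in on each pass."""
        pose = np.zeros((num_joints, 2))  # initial guess, e.g. a mean pose
        for _ in range(steps):
            pose = pose + predict_correction(image_features, pose)
        return pose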

Speaker: Dr. Joao Carreira
Venue: MB220
Time: 14:00 - 15:00

21st March 2017 - Dr. Leandro Minku

Online Ensemble Learning of Data Streams with Gradually Evolved Classes

In machine learning, class evolution is the phenomenon of class emergence and disappearance. It is likely to occur in many data stream problems, which are problems where additional training data become available over time. For example, in the problem of classifying tweets according to their topic, new topics may emerge over time, and certain topics may become unpopular and not be discussed anymore. Therefore, class evolution is an important research topic in the area of learning from data streams. Existing work implicitly regards class evolution as an abrupt change. However, in many real-world problems, classes emerge or disappear gradually. This gives rise to extra challenges, such as non-stationary imbalance ratios between the different classes in the problem. In this talk, I will present an ensemble approach able to deal with gradually evolved classes. In order to quickly adjust to class evolution, the ensemble maintains a base learner for each class and dynamically creates, updates and (de)activates base learners whenever new training data become available. It also uses a dynamic undersampling technique in order to deal with the non-stationary class imbalance present in this type of problem. Empirical studies demonstrate the effectiveness of the proposed approach in various class evolution scenarios in comparison with existing class evolution approaches.
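
A minimal sketch of the core bookkeeping (a simplification under assumed interfaces: make_learner and its update method are hypothetical, and the fixed-size random undersampling is a crude stand-in for the talk's dynamic undersampling):

    import random

    class ClassEvolutionEnsemble:
        """One base learner per class: learners are created when a class
        first appears, updated as new examples arrive, and deactivated
        when their class has not been seen for a while (disappearance)."""

        def __init__(self, make_learner, inactive_after=1000):
            self.make_learner = make_learner  # factory for base learners
            self.learners = {}                # class label -> learner
            self.last_seen = {}               # class label -> last example time
            self.t = 0
            self.inactive_after = inactive_after

        def partial_fit(self, x, y, recent_negatives):
            self.t += 1
            if y not in self.learners:        # class emergence
                self.learners[y] = self.make_learner()
            self.last_seen[y] = self.t
            # Undersample the negatives so a newly emerging (rare) class
            # is not swamped by the established ones.
            sampled = random.sample(recent_negatives,
                                    min(len(recent_negatives), 10))
            self.learners[y].update(positives=[x], negatives=sampled)

        def active_learners(self):
            """Learners for recently seen classes; the rest are deactivated."""
            return {c: l for c, l in self.learners.items()
                    if self.t - self.last_seen[c] <= self.inactive_after}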

Speaker: Dr. Leandro Minku
Venue: MB220
Time: 14:00 - 15:00

14th March 2017 - Dr. Mike Joy and Dr. Meurig Beynon

CONSTRUIT!

CONSTRUIT! is a three-year Erasmus+ project, involving six partners led by the University of Warwick and scheduled for completion in August 2017, on the theme of "Making construals as a new digital skill for creating interactive open educational resources".

Where programs reflect the practices of a mind following rules, construals are artefacts developed by a mind making sense of a situation. In this respect, a construal is well-matched to the unconventional role that Seymour Papert had in mind for programs in his constructionist approach to learning: that of objects-to-think-with whose construction obliges the learner to reflect on the basis for their knowledge and in that process enrich their domain understanding.

This seminar will discuss the significance of making construals in relation to three research topics motivated by Papert's work:

  • principles and practices that support constructionism;
  • objects-to-think-with as a basis for novel learning resources;
  • exploiting constructionist principles in the classroom and curriculum.

It will take the form of a reflective illustrated account of the work that has been done towards developing, deploying and evaluating techniques and environments for making construals in the course of CONSTRUIT!.

Speakers: Dr. Mike Joy and Dr. Meurig Beynon
Venue: MB220
Time: 14:00 - 15:00

7th March 2017 - Dr. William Langdon

Long-Term Evolution in Genetic Programming

We evolve 6-mux populations of genetic programming binary Boolean trees for up to 100,000 generations. As there is no bloat control, programs with more than a hundred million nodes may be created by crossover. These are by far the largest programs yet evolved. Our unbounded Long-Term Evolution Experiment (LTEE) GP appears not to evolve building blocks but does suggest a limit to bloat.
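
For reference, the 6-mux benchmark: two address bits select which of four data bits the program must output, giving 64 test cases. A sketch of the fitness evaluation (illustrative code, not the paper's implementation; the address-bit ordering is an arbitrary choice):

    from itertools import product

    def six_mux(a0, a1, d0, d1, d2, d3):
        """Ground truth for 6-mux: the two address bits select which of
        the four data bits is the answer."""
        return (d0, d1, d2, d3)[2 * a0 + a1]

    def fitness(program):
        """Score an evolved Boolean tree (here, any callable over six
        bits) by the number of the 64 input cases it answers correctly."""
        return sum(program(*bits) == six_mux(*bits)
                   for bits in product((0, 1), repeat=6))

    # A perfect individual scores 64; whole populations converge on this
    # phenotype even while their trees keep changing.
    print(fitness(lambda a0, a1, d0, d1, d2, d3: (d0, d1, d2, d3)[2 * a0 + a1]))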

We do see periods of tens, even hundreds, of generations where the whole population is functionally converged. In contrast to wetware LTEE experiments with bacteria (a genome 4.6 million base pairs in length and 66,000 generations), we do not see continual innovation; instead, although each tree in the population may be different, they all have the same phenotype (in that they can all solve the multiplexor benchmark) and the code next to the tree's root becomes highly stable.

We test theory about the distribution of tree sizes. Surprisingly, in real finite populations with typical GP tournament selection, we do see deviations from crossover-only theoretical predictions.

Speaker: Dr. William Langdon
Venue: MB220
Time: 14:00 - 15:00

21st February 2017 - Eike Neumann

Representations for feasibly approximable functions

Given a continuous real function, two of the most basic computational tasks are the computation of its integral and its range. Both problems are generally perceived to be "easy" by practitioners (given that the domain is one-dimensional). Hence it was surprising when Ko and Friedman in 1982 proved that these problems are #P-hard and NP-hard, respectively.

Our hypothesis is that this discrepancy is due to the fact that complexity theorists use the simplest natural representation of continuous real functions, which treats all polytime computable functions equally. Practitioners, on the other hand, use representations which are biased towards a small class of functions that typically occur in practice. We evaluate this hypothesis using both theoretical and practical tools.

Building on work by Labhalla, Lombardi, and Moutai (2001) and Kawamura, Mueller, Roesnick, and Ziegler (2015), we review several common admissible representations of continuous real functions, study their polynomial-time reducibility lattice and benchmark their performance using the AERN Haskell library for exact real computation.

We include the standard continuous function representation used in computational complexity theory where all polytime computable functions are polytime representable.

The other representations we study are all based on rigorous approximations by polynomials or rational functions with dyadic coefficients. In these representations maximisation and integration are feasibly computable but not all polytime computable functions are polytime representable.
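
Schematically (an informal sketch, not the talk's formal definitions): such a representation delivers, for each accuracy request n, a polynomial with dyadic coefficients uniformly within 2^-n of the represented function, so integration reduces to exactly integrating the polynomial:

    from fractions import Fraction

    def integrate_poly(coeffs):
        """Exactly integrate sum(c_k * x^k) over [0, 1]."""
        return sum(Fraction(c) / (k + 1) for k, c in enumerate(coeffs))

    def integrate(represented_f, n):
        """Integrate a represented function to accuracy 2^-n on [0, 1].
        represented_f(n) is assumed to return the (dyadic) coefficients
        of a polynomial within 2^-n of the function in the sup norm, so
        integrating the polynomial lands within 2^-n of the true value."""
        return integrate_poly(represented_f(n))

    # Example: a representation of f(x) = x^2, exact at every accuracy.
    x_squared = lambda n: [0, 0, 1]
    print(integrate(x_squared, 10))  # prints 1/3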

We show that the representation by piecewise-polynomial approximations is equivalent to the representation by rational function approximations with respect to polynomial-time reducibility.

These two representations seem to form a sweet spot regarding the trade-off between the ability to feasibly represent a large number of functions and the ability to feasibly compute operations such as integration and maximisation.

Speaker: Eike Neumann
Venue: MB220
Time: 14:00 - 15:00

14th February 2017 - Dr. Anakreontas Mentis

Productivity tools for a legacy interpreted programming language

Phoebus Software Ltd is a leading provider of software for the management of lending and savings at financial institutions. Phoebus has been able to produce high-quality, reliable software quickly with the help of their in-house programming language called P4. P4 has features for the rapid development of complex form-based, database-backed applications. However, P4 was designed 20 years ago and lacks tools present in modern programming languages, such as code checkers and IDEs. Moreover, P4 is interpreted and supports code changes on the fly when deployed. This dynamic nature of the language has become an obstacle as the code base has grown very large. We describe how we improved the definition of the P4 language and produced a validator that, when integrated with an IDE, identifies various classes of programming defects as a P4 program is edited. In particular, we have added a type system to P4 and defined finite-state models for the database interaction. We also give an overview of the technology used under the hood, namely the Haskell functional programming language, the Parsec parser library and the Hoopl library for control-flow analysis.
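
As a toy illustration of the finite-state idea (an invented example, not the actual P4 validator, and in Python rather than Haskell): database interaction is modelled as states and allowed transitions, and an analysis over the program's control flow flags any operation sequence the model forbids:

    # States of a database cursor and the legal transitions between them.
    TRANSITIONS = {
        ("closed", "open"): "opened",
        ("opened", "fetch"): "opened",
        ("opened", "close"): "closed",
    }

    def check_db_usage(operations):
        """Walk a sequence of database operations extracted from a
        program; report the first operation illegal in the current state."""
        state = "closed"
        for i, op in enumerate(operations):
            nxt = TRANSITIONS.get((state, op))
            if nxt is None:
                return f"defect at operation {i}: '{op}' illegal in state '{state}'"
            state = nxt
        return "ok" if state == "closed" else f"defect: ended in state '{state}'"

    print(check_db_usage(["open", "fetch", "close"]))  # ok
    print(check_db_usage(["open", "fetch"]))           # cursor left open
    print(check_db_usage(["fetch"]))                   # fetch before open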

Speaker: Dr. Anakreontas Mentis
Venue: MB220
Time: 14:00 - 15:00

7th February 2017 - Dr. Dimitris Kolovos

Towards Scalable Model-Driven Engineering

Model-Driven Engineering (MDE) is a software engineering approach that promotes domain-specific models as first-class artefacts of the software development and maintenance lifecycle. As MDE is increasingly used for the development of larger and more complex software systems, the current generation of modelling and model management technologies are being pushed to their limits.

In this talk I will provide an overview of some of the most important scalability challenges that manifest when working with large (collections of) domain-specific models. I will then go through ongoing work that attempts to address these challenges by providing support for parallel and reactive code generation, partial model loading, and model indexing.

Speaker: Dr. Dimitris Kolovos
Venue: MB220
Time: 14:00 - 15:00

24th January 2017 - Raghavendra Raj

Business Intelligence Solution for an SME: a Case Study

Business Intelligence (BI) leverages the usefulness of existing information. It equips business users with relevant information to perform various analyses to make key business decisions. Over the last two decades, BI has become a core strategy for the growth of many companies, in particular large corporations. However, studies show that small and medium-sized enterprises (SMEs) lag behind in the implementation and exploitation of BI solutions. To stay ahead of the competition, SMEs must be able to monitor and effectively use all of their resources, in particular information resources, to assist them in making important business decisions. We have examined challenges such as lack of technical expertise and limited budget when implementing a BI solution within an SME in the UK. In light of our experiences in tackling these issues, this seminar discusses how these challenges can be overcome by applying various tools and strategies, and the potential benefits of doing so.

Speaker: Raghavendra Raj
Venue: MB220
Time: 14:00 - 15:00

20th January 2017 - Dr. Stephen Marsh

Slow Computing, Wisdom, and ideas for Comfort-able Answers to Fake News

Remember Flash Crashes? Computing is fast, by default. That's good, but there are times when it pays to slow down to the speed of thought and consider what the fast decisions might result in, not far down the line. More, it behooves us to think more about the people in the system, and how they can help the system be 'more'. This idea, the concept of Slow Computing, grew from discussions at Dagstuhl about a year ago, and gradually began to contribute to explorations of Wisdom in computational systems. Wisdom, the capacity for contextually guided rational and correct thought in unfamiliar situations, seems exactly the kind of thing we need to bring our computational systems into the human world, where they are going to have to be. This talk presents our thoughts and research on Slow Computing and Wisdom before diving into the related concepts of Device Comfort and Computational Trust, and ends with a look at how thinking more slowly, and integrating comfort and trust reasoning into information systems, might just help us with some of the more pressing challenges of social media.

Speaker: Dr. Stephen Marsh
Venue: MB146
Time: 14:00 - 15:00

29th November 2016 - Dr. Yulan He

Unsupervised Event Extraction and Storyline Generation from Text

This talk consists of two parts. In the first part, I will present our proposed Latent Event and Categorisation Model (LECM), an unsupervised Bayesian model for the extraction of structured representations of events from Twitter without the use of any labelled data. The extracted events are automatically clustered into coherent event type groups. The proposed framework has been evaluated on over 60 million tweets and has achieved a precision of 70%, outperforming the state-of-the-art open event extraction system by nearly 6%. The LECM model has been extended to jointly model event extraction and visualisation, in which each event is modelled as a joint distribution over named entities, a date, a location and event-related keywords. Moreover, both tweets and event instances are associated with coordinates in the visualisation space. Experimental results show that the proposed approach performs remarkably better than both the state-of-the-art event extraction method and a pipeline approach to event extraction and visualisation.
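
Schematically, the joint modelling can be pictured as each latent event e generating its observable fields (the factorisation and notation below are illustrative; the talk gives the exact model):

    % Sketch of an LECM-style factorisation, with y a named entity,
    % d a date, l a location and k an event-related keyword.
    p(y, d, l, k \mid e) \;=\; p(y \mid e)\, p(d \mid e)\, p(l \mid e)\, p(k \mid e)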

In the second part of my talk, I will present a non-parametric generative model to extract structured representations and evolution patterns of storylines simultaneously. In the model, each storyline is modelled as a joint distribution over locations, organisations, persons, keywords and a set of topics. We further combine this model with the Chinese restaurant process so that the number of storylines can be determined automatically, without human intervention. The proposed model has been evaluated on three news corpora and the experimental results show that it generates coherent storylines from news articles.
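
For intuition about the Chinese restaurant process step, sketched here in isolation from the full model: a new article joins an existing storyline with probability proportional to that storyline's current size, or starts a new one with probability proportional to a concentration parameter alpha, so the number of storylines need not be fixed in advance:

    import random

    def crp_assign(counts, alpha=1.0):
        """Chinese restaurant process: return the index of the storyline
        a new document joins. counts[k] is the number of documents in
        storyline k; index len(counts) means 'start a new storyline'."""
        weights = counts + [alpha]
        r = random.uniform(0, sum(weights))
        for k, w in enumerate(weights):
            r -= w
            if r <= 0:
                return k
        return len(counts)

    # Simulate assigning 1000 documents; the number of storylines grows
    # slowly (roughly alpha * log n) rather than being set by hand.
    counts = []
    for _ in range(1000):
        k = crp_assign(counts, alpha=2.0)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
    print(len(counts), "storylines")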

Speaker: Dr. Yulan He
Venue: MB404A
Time: 14:00 - 15:00

15th November 2016 - Dr. David Sanderson

Advanced Manufacturing: An Application Domain for Adaptive Systems Research

This talk will discuss manufacturing as an application domain and some of the research being done at the Institute for Advanced Manufacturing at the University of Nottingham. The talk will be grounded in real demonstration scenarios designed to address industrial problems. Particular detail will be given to the adaptive agent-based architectural concept and an approach for determining the realisability (or manufacturability) of products in a "batch-size-of-one" situation, where each product being made in a system may be unique.

Speaker: Dr. David Sanderson
Venue: MB404A
Time: 14:00 - 15:00

11th October 2016 - Dr. Antonio Garcia-Dominguez

From linked files to NoSQL graphs: analysis of Eclipse projects

Hawk [1] is an indexing solution that can monitor collections of structured files, mirror them into typed graphs, and query them in an efficient and concise way. Nodes can be indexed by attribute values, and types can be extended with derived attributes and edges depending on the queries to be run.
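
A much-simplified sketch of the indexing idea (a Python illustration only; Hawk's actual query languages, APIs and graph backends differ): mirror records into a graph and keep an index from attribute values to nodes, so lookups avoid scanning every node:

    from collections import defaultdict

    class TinyGraphIndex:
        """Nodes are typed attribute dicts; an attribute index maps
        (type, attribute, value) -> node ids for fast lookup."""

        def __init__(self):
            self.nodes = {}
            self.index = defaultdict(set)

        def add_node(self, node_id, node_type, **attrs):
            self.nodes[node_id] = {"type": node_type, **attrs}
            for attr, value in attrs.items():
                self.index[(node_type, attr, value)].add(node_id)

        def find(self, node_type, attr, value):
            return [self.nodes[i] for i in self.index[(node_type, attr, value)]]

    g = TinyGraphIndex()
    g.add_node("p1", "Plugin", name="org.eclipse.core.runtime")
    g.add_node("p2", "Plugin", name="org.eclipse.ui")
    print(g.find("Plugin", "name", "org.eclipse.ui"))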

Hawk has recently been extended with the capability to read the metadata that links Eclipse plugins together and groups them into high-level projects. In this talk, I will introduce the concepts behind Hawk and discuss the state of our current studies on the eclipse.org codebase. I am looking for feedback on our current approach, and for pointers to structural pattern recognition approaches that may be useful for this software repository mining problem.

[1]: https://github.com/mondo-project/mondo-hawk

Speaker: Dr. Antonio Garcia-Dominguez
Venue: MB404A
Time: 14:00 - 15:00