27th November 2018 - Dr. Stefan Tillich

"Smart Cards: Trust Anchors in a Sea of Risk"

Abstract: Digital processing and communication systems are continuing their spread into all avenues of life. Numerous contemporary buzzwords like Industry 4.0, Internet of Things, Car2Car communication, and Mobile Payment are markers of this ongoing and ever accelerating development. While the benefits of this digitalization are often clear and undisputed, it is important not to forget that the other side of the coin is an increase of risk for the various stakeholders of these new systems. For example, inter-car communication can clearly help to increase drivers' situational awareness and thus greatly improve safety. On the other hand, malicious parties pretending to be cars and sending spoofed messages to vehicles may pose significant security and safety hazards. Our best defense against such threats is the correct application of well-designed security protocols. Virtually all such protected systems rely on so-called trust anchors to provide the basic security assurances from which all other security assurances derive. This talk will examine the role of modern smart card technology as a trust anchor in modern digital systems. We will look at the capabilities and integration of state-of-the-art smart card modules and current threats and attacks, and give an outlook on possible future enhancements.

Short Bio: Stefan Tillich is CTO at Yagoba GmbH, an Austria-based company focused on providing embedded security solutions. As such, he has extensive experience with designing and building secure systems and with the integration and use of smart card modules. Previously, Stefan was a research associate at the University of Bristol and, before that, a project senior scientist at Graz University of Technology, where he performed research in the fields of applied cryptography and secure systems.

Speaker: Dr. Stefan Tillich
Venue: TBA
Time: 14:00 - 15:00

Wednesday 21st November 2018 - Dr. Carlos Cetina

"Localisation of Features, Bugs, and Requirements in Software Models: Industrial experiences"

Abstract: Feature Localisation, Bug Localisation, and Requirements Traceability are among the most popular activities performed in the context of Software Engineering. Once industry adopts Model Driven Software Engineering (MDSE), the popularity of these activities is not going to fade away. The good news is that the abstraction of MDSE models should ease these popular activities. The bad news is that these activities have been neglected in the context of MDSE. For example, recent surveys in top Software Engineering journals do not identify any localisation approach that targets software models. In this talk, we will go through the efforts performed in two industrial case studies (Induction Hobs of BSH Group, and Train Control & Management Software of CAF) to achieve the above localisation activities in models. These efforts range from Information Retrieval to Machine Learning, and include the dimension of Search-based Software Engineering. The results are not perfect, but we will discuss whether they are up to the task.

Short Bio: Carlos Cetina is an associate professor at San Jorge University and the head of the SVIT Research Group (visit svit.usj.es). His research focuses on the intersection between Software Product Lines and Model-driven Development. Recently, Cetina has become a member of the Search-based Software Engineering community by showing how to improve localisation activities in software models. Cetina received a PhD in computer science from the Universitat Politècnica de València. More information about his background can be found at his website: carloscetina.com.

Speaker: Dr. Carlos Cetina
Venue: the Research Institute space on the 3rd floor (no label on the door yet, but it is right next to MB304)
Time: 14:00 - 15:00

13th November 2018 - Dr. Ben Shreeve

"Decisions & Disruptions: Using Lego to help organisations learn about cyber security"

Abstract: Decisions & Disruptions originated as a tabletop exercise designed to help research how organisations make cyber security investment decisions. The exercise presents small teams of participants with a simplified Lego representation of a power generation company. Teams are given four rounds to play through and a finite budget to spend on cyber security investments. At the end of each round they suffer a number of cyber attacks based on the choices made. In this seminar I will provide a brief overview of the game and some of our initial findings, and summarise how the exercise has been adopted by the Metropolitan Police to help raise awareness of cyber security implications in a wide range of organisations.

Short Bio: Dr. Ben Shreeve is a research associate at the University of Bristol, UK, an Academic Centre of Excellence in Cyber Security Research. Over the past 18 months he has been working on the MUMBA (Multi-faceted Metrics for ICS Business Risk Analysis) project. This has involved running sessions using the Decisions & Disruptions game with both industry and government organisations to help promote dialogue and awareness of cyber security risks and decision making. He has also been responsible for training a number of government teams to run these games themselves with a wide variety of stakeholders. He has recently completed a Ph.D. at Lancaster University exploring the differences in decision-making and creativity approaches used by traditional co-located and geographically distributed (virtual) teams. As part of this research he was invited as a visiting academic to the University of Auckland, NZ, for 8 months.

Speaker: Dr. Ben Shreeve
Venue: the Research Institute space on the 3rd floor (no label on the door yet, but it is right next to MB304)
Time: 14:00 - 15:00

6th November 2018 - Dr. Laurence Tratt

"Why aren't more users more happy with our VMs?"

Abstract: Programming language Virtual Machines (VMs) are now widely used, from server applications to web browsers. Published benchmarks regularly show that VMs can optimise programs to the same degree as, and often substantially better than, traditional static compilers. Yet, there are still people who are unhappy with the VMs they use. Frequently their programs don't run anywhere near as fast as benchmarks suggest; sometimes their programs even run slower than more naive language implementations. Often our reaction is to tell such users that their programs are "wrong" and that they should fix them. This talk takes a detailed look at VM performance, based on a lengthy experiment and a new statistical technique for analysing warmup time: we not only uncovered unexpected patterns of behaviour, but found that VMs perform poorly far more often than previously thought. I will draw on some of my own experiences to suggest how we may have gotten into such a pickle. Finally, I will offer some suggestions as to how we might be able to make more VM users more happy in the future.

Bio: Laurence Tratt is a Reader in Software Development at King's College London, where he leads the Software Development Team. His past work includes VM optimisation techniques (e.g. "storage strategies") and language composition (e.g. "PyHyp", the first composition of two real-world languages). More at https://tratt.net/laurie/.

Speaker: Dr. Laurence Tratt
Venue: TBA
Time: 14:00 - 15:00

30th October 2018 - Lorena Gutiérrez-Madroñal

"Relevant situations in IoT systems, how to test them?: a case study"

Abstract: The Internet of Things (IoT) is being applied to different areas. The main characteristic of these systems is the management of a huge amount of information that arrives as events. This management is an issue because real-time decisions have to be made, based on the received data, according to the relevant situations that are to be detected. After some analysis, we have realised that the majority of those relevant situations follow a specific behaviour. So, to test these systems, we need to generate test events following that specific behaviour. We have developed the IoT-TEG tool, which allows us to automatically generate test events of any event type. Building on this implementation, we have extended it with new functionality to generate events with the desired behaviour. IoT-TEG has been used with an ongoing fall detection IoT system, which helped in the development of the new functionality.

Short Bio: Lorena Gutiérrez-Madroñal received her first-class Honours Degree in Computer Systems Management in 2007, her BSc in Computer Science in 2009, her Master of Advanced Studies in Computer Science in 2010, and her PhD in 2017 at the University of Cádiz (Spain). She has been working at the Department of Computer Science and Engineering as a full-time lecturer since 2009. Her research is focused on the Internet of Things and test event generation for any event-processing program. To prove the usability of the generated test events, she is using them to apply mutation testing to event-processing query languages, such as the Event Processing Language (EPL). She has participated in research projects, all involving aspects of software engineering. She has served on programme and organising committees at different conferences. She is a researcher in the UCASE Software Engineering Research Group.

Speaker: Lorena Gutiérrez-Madroñal
Venue: TBA
Time: 14:00 - 15:00

9th October 2018 - Dr. Paul Grace

"Managing Risks using Runtime Models"

Abstract: The increasing complexity of distributed systems, and the move towards the dynamic composition of systems of systems, means that traditional methods to handle security and privacy threats at design time do not take into account the uncertainty about (and emergence of) new threats. Runtime models and their associated dynamic middleware solutions offer: i) the ability to identify and reason about increasing risk, and ii) the ability to dynamically adapt software and systems to mitigate threats and reduce risk. In this talk I will present results from two research projects that cover initial forays into this avenue of research. The first, Operando, examines the use of runtime models of privacy to handle threats to an individual’s privacy (as opposed to blanket privacy guarantees). The second, RestAssured, uses a runtime model of assets, threats and misbehaviours to perform runtime risk analysis of Cloud-based Systems.

Bio: Paul Grace is a senior research fellow in the School of Electronics and Computer Science at the University of Southampton. His current work looks at the engineering of secure and trustworthy adaptive systems; in particular using runtime models to identify and mitigate dynamic security and privacy threats. Previously, he has been an enterprise fellow at the IT Innovation Centre and before that a researcher at both Lancaster University and the Katholieke Universiteit Leuven.

Speaker: Dr. Paul Grace
Venue: TBA
Time: 14:00 - 15:00

2nd October 2018 - Dr. Nicolas Cardozo

"Learning to Adapt from Past Behaviour"

Realization of adaptive behavior in software systems is usually predefined by developers. During the development process, developers specify the base behavior of the system alongside the adaptations they foresaw for it, together with the situations in which these can take place. While this model offers dynamic behavior of the system in the face of different situations, it is, unfortunately, restrictive in terms of the adaptations that can be observed. In this talk we discuss an adaptation model that enables the generation of run-time adaptations based on previous executions of the system. The model brings together the learning process of autonomous self-adaptive systems with the language abstractions of context-oriented programming. The union of these models offers the flexibility and modularity required to realize truly adaptive systems. We envision the applicability of this model for new interactive systems, like smart environments, cyber-physical systems, or the IoT.
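The language abstractions of context-oriented programming mentioned in the abstract can be illustrated with a toy sketch. This is a minimal, hypothetical example (all names are invented for illustration; it is not the speaker's actual model): a "layer" carries partial method definitions that take precedence over the base behaviour while the layer is active.

```python
class Greeter:
    """Base behaviour, defined as an ordinary class."""
    def greet(self):
        return "hello"

# Layer name -> partial method definitions that refine the base class.
layers = {
    "formal": {"greet": lambda self: "good afternoon"},
    "noisy":  {"greet": lambda self: "HELLO!"},
}
active = []  # stack of currently active layers; most recent activation wins

def cop_call(obj, method):
    """Context-dependent dispatch: prefer the newest active layer refining `method`."""
    for layer in reversed(active):
        if method in layers.get(layer, {}):
            return layers[layer][method](obj)
    return getattr(obj, method)()

g = Greeter()
base = cop_call(g, "greet")     # base behaviour: "hello"
active.append("formal")         # a context change activates a layer
adapted = cop_call(g, "greet")  # adapted behaviour: "good afternoon"
```

In a learning-based adaptation model, one could imagine the activation of such layers being driven by observations of previous executions rather than hard-coded by the developer.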

Bio: Nicolás is an assistant professor at the Universidad de los Andes in Bogotá, Colombia. His research focuses on the development, verification, and application of adaptive systems from a programming language perspective. Nicolás focuses on the design and implementation of programming languages offering abstractions to foster dynamic adaptation, under the umbrella of Context-oriented Programming. In recent years, Nicolás has focused on the dynamic verification of adaptive systems, incorporating dynamic verification techniques into context-oriented languages to assure the consistency, correctness, and completeness properties of adaptive systems. Moreover, as an application domain for dynamic adaptations, Nicolás develops smart and IoT applications that take into account their context of usage and can adapt accordingly.

Nicolás received his Ph.D. from the Université catholique de Louvain and the Vrije Universiteit Brussel in Belgium, and participated in postdoctoral fellowships in Belgium and Ireland before joining Universidad de los Andes.

Speaker: Dr. Nicolas Cardozo
Venue: TBA
Time: 14:00 - 15:00

24th April 2018 - Dr. Kirstie L. Bellman

Extending Our Concepts of Self-Optimization and Self-Improvement: From Tasks to Situations

Current self-optimization generally makes the assumption that the system has been given a set of goals, operating constraints, and requirements; then the system continuously adjusts its repertoire of methods to adapt to the current operational conditions in order to fulfill those requirements and, even better, to improve on its performance for those requirements. This is an important approach that has borne much fruit!

However, it also can lead to a singularly reactive and non-future thinking system – a system driven by the current job at hand and one where the system never gets out ahead of current conditions.
Consider some of the characteristics of natural systems: Animals are willing to sacrifice current goals for better benefits later (even extending into altruism at the societal level.) This type of optimization includes what is needed for a better next move or better positioning (e.g., taking higher ground, building a burrow, retreating and running away to survive long enough to regroup, letting the ball go past one so one can protect the goalie, etc.) Animals also demonstrate merging of goals (which can even mean underperforming on all current goals to some extent in order to fulfill several goals in the current operational situation.) Animals also have reasoning processes that reflect on past experience in order to discover root causes of changes in the operating conditions or environment or the system itself. They also learn to develop better automated responses and ‘train’ (play) to systematize future reactions and to discover how best to use their individual capabilities in a given environment. As part of that, they invest in exploratory behavior in order to have better future knowledge of the environment and potential applicable behaviors. All of these and many more behaviors and cognitive capabilities are examples of stepping back from the immediate accomplishment of a goal or a performance of an action in order to improve the system’s overall situation. We are using the term “situation” in the technical sense, as defined by both the Cognitive Simulation community and the self-awareness community, where situation includes at least the elements of the situation, e.g., objects, events, people, systems, environmental factors; their current states, e.g., locations, conditions, modes, and actions, and the system’s context information and goals.

In this talk, we focus on a subset of the adaptive capabilities noted above, giving examples of the abilities of animal systems to satisfy multiple goals by combining behaviors, to forego obtainable immediate goals in order to better position themselves for improved future behaviors and to even alter their environments in order to improve the likelihood of successful goal outcomes and their overall situation. We will then examine the implications of each of the above for how we might want to better use our current self-optimization methods and approaches and where we might want to add additional capabilities to our repertoire of methods. This will include some examples of using different timescales in our self-optimization approaches and having them operate in parallel, as well as introducing multiple goals into our self-improvement processes. We will also discuss what is needed to reason about better ‘future positioning’.

A version of this address was created as the Keynote for SAOS 2018 and the author wishes to thank that community for the inspiration leading to this talk.

Speaker: Dr. Kirstie L. Bellman
Venue: MB246A
Time: 14:00 - 15:00

17th April 2018 - Dr. Antonio Garcia-Dominguez

Towards temporal queries for evolving linked data

Engineering firms build linked collections of complex artefacts that evolve over time: floor plans, software requirements and architectures, control processes, or simulation programs, among others. The execution state of a running system can also be seen as a complex graph whose nodes, edges and values change over time, e.g. in self-adaptive applications. In both cases, we may want to derive metrics or effective workflows from their evolution over time, detect when certain situations may have happened, or compare snapshots at certain key points in time. With this in mind, I have started working on the extension of the Hawk model indexer so it can produce a temporal graph from collections of structured and linked files stored in standard version control systems, and I have drafted what the extended query language would look like. In this talk, I will introduce prior works in temporal querying over object-oriented databases, event streams and object-oriented system models, present the first steps taken with the Greycat many-worlds temporal graph technology, and set out a research roadmap for the short- and mid-term future.
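To make the idea of querying an evolving graph concrete, here is a toy versioned-property node in Python. It is purely illustrative (the class and method names are invented; this is not Hawk's or Greycat's actual API): each property keeps a timestamped history, and a query asks for the value as of a given point in time.

```python
import bisect

class TemporalNode:
    """Toy graph node whose property values are versioned by timestamp."""
    def __init__(self):
        self.history = {}  # property name -> sorted list of (time, value)

    def set(self, prop, time, value):
        bisect.insort(self.history.setdefault(prop, []), (time, value))

    def at(self, prop, time):
        """Value of `prop` as of `time`: the latest change not after `time`."""
        versions = self.history.get(prop, [])
        i = bisect.bisect_right(versions, (time, float("inf")))
        return versions[i - 1][1] if i else None

# A floor plan whose room count changed between two commits.
plan = TemporalNode()
plan.set("rooms", time=1, value=4)
plan.set("rooms", time=5, value=6)
```

A temporal query language of the kind sketched in the talk would then let such point-in-time lookups, ranges, and comparisons between snapshots be expressed declaratively over the whole graph rather than node by node.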

Speaker: Dr. Antonio Garcia-Dominguez
Venue: MB246A
Time: 14:00 - 15:00

16th February 2018 - Dr. Joao Filipe Ferreira

Human-Robot Interaction - The Need for Skills

In this talk, I will present my personal outlook on HRI for the near future. Service and assistive robots are still far from being capable of maintaining long-term relationships with humans – in current roadmaps for robotic research, the keyword “long-term” or its synonyms are constantly repeated concerning cognition, and “slow, enduring change and development” in artificial cognitive systems is preferred over “one-shot, fast learning and adaptation” and “static, repetitive or limited flexibility”, which are recognised as the common traits of current technologies. In recent years, a considerable effort has been devoted to researching perception and decision processes for artificial cognitive systems. As a consequence, HRI technologies and the corresponding cognitive capabilities of robotic systems have seen many developments in the last few decades, enabling service and assistive robots to exhibit sufficient social skills to maintain basic short-term interactions with humans. Nevertheless, HRI technologies are still far from providing a degree of social capability to rival a human's. This restricts most current socially interactive robots to controlled environments and highly specialised tasks. First of all, an integrated approach encapsulating, interconnecting and consolidating the basic skills mentioned above to tackle generic and unconstrained settings is clearly missing. On the other hand, the research efforts that have led to current artificial cognitive systems driving socially interactive robots have not yet produced a convincing overall approach to the crucial aspects of dealing, in the long haul, with information gathered through experience, context awareness and deduction. Therefore, I would like to propose to my audience that there is a need for (1) exploring what current (hot!) techniques and computational tools such as deep learning or probabilistic methods, and also advances in technologies such as SoCs, GPUs and programmable logic, have to offer in this respect, and (2) using these to take a step back and jumpstart an additional wave of fundamental research in modelling and implementing basic perceptual and low-level (“involuntary”) cognitive skills. The resulting frameworks would serve as middleware for higher-level cognition in robotics, providing a standardised way of accessing pre-processed and prioritised sensory information for decision-making and complex planning and action. They would be inspired by the human brain at a functional level, taking cross-disciplinary advantage of recent advances in psychology and neuroscience, and as such would naturally endow the robot with the capability to instil a sense of intentionality and reciprocity in HRI.

Speaker: Dr. Joao Filipe Ferreira
Venue: MB404A
Time: 14:00 - 15:00

13th February 2018 - Prof. Jeremy Pitt

Interactional Justice vs. The Paradox of Self-Amendment and the Iron Law of Oligarchy

Self-organisation and self-governance offer an effective approach to resolving collective action problems in multi-agent systems, such as fair and sustainable resource allocation. Nevertheless, self-governing systems which allow unrestricted and unsupervised self-modification expose themselves to several risks, including Suber's paradox of self-amendment (rules specify their own amendment) and Michels' iron law of oligarchy (that the system will inevitably be taken over by a small clique and be run for its own benefit, rather than in the collective interest). This talk will present an algorithmic approach to resisting both the paradox and the iron law, based on the idea of interactional justice derived from sociological, political and organizational theory. The process of interactional justice operationalised in this talk uses opinion formation over a social network with respect to a shared set of congruent values, to transform a set of individual, subjective self-assessments into a collective, relative, aggregated assessment. Using multi-agent simulation, we present some experimental results about detecting and resisting cliques. We conclude with a discussion of some implications concerning institutional reformation and stability, ownership of the means of coordination, and knowledge management processes in 'democratic' systems.

Speaker: Prof. Jeremy Pitt
Venue: MB231
Time: 14:00 - 15:00

23rd January 2018 - Dr. Errol Thompson

45 years of computer science

In this seminar, Errol will review 45 years of computer science looking at some of the things that have changed and some of the issues that have remained the same. Errol will review some of his original work on programming languages, computer architecture, and operating systems. Having worked in both academia and industry, Errol has both an academic and practitioner understanding of the issues.

Speaker: Dr. Errol Thompson
Venue: MB486
Time: 14:00 - 15:00

5th December 2017 - Prof. Alan Dix

Open Data Islands and Communities

How do we make digital technology serve those at the physical and social margins of society? Digital technology, not least the internet, has transformed many aspects of our lives. Crucially, in many countries access to digital technology has become an essential part of modern citizenship: for commercial services, for access to government, and for participation in democratic processes; for example, much of the UK Brexit and US Presidential campaigns were fought on Facebook. However, the ability to take advantage of digital technology is not uniform: those at the margins typically have disproportionately poor access, both in terms of physical connectivity and skills. There is a danger that digital technology can deepen the existing divides in our world. In this talk I will look at these issues and, most importantly, at ways we can, as researchers and practitioners, seek to create technologies that serve all communities. I will focus particularly on open data: how we can devise ways to make it more easily found, accessed, and visualised by small communities at the edges, and moreover how they can become active creators of information: producers, not merely subjects, of data. I will draw on experience in a number of projects on the small Scottish island of Tiree and also my 1000 mile walk around the edges of Wales.

Speaker: Prof. Alan Dix
Venue: MB373
Time: 14:00 - 15:00

28th November 2017 - Dr. Roisin McNaney

Exploring the potential for monitoring Parkinson's symptoms through IoT

Dr. McNaney recently ran a 2-day workshop that aimed to bring together practitioners, researchers, designers and citizens from the Parkinson's community to explore how commercial IoT technologies might be used to support people with Parkinson's in their day-to-day lives, to understand more about their condition, and to facilitate better care planning discussions with their clinicians. This seminar will summarise the workshop themes and findings and identify the space for future work to move forward with.

Speaker: Dr. Roisin McNaney
Venue: MB373
Time: 14:00 - 15:00

24th October 2017 - Dr. Christopher Buckingham

An odyssey of sirens, sorcery, and shipwreck for mental health informatics

It is notoriously difficult to get research out into practice for medical informatics in general and clinical decision support systems (CDSSs) in particular. Mycin was one of the earliest and most famous CDSSs, but was never used even though it had good laboratory performance. This pattern has been repeated ever since, and much has been written on the barriers in the way of health informatics. This talk will explore these barriers using the experiences of the developers of GRiST, a CDSS for assessing and managing risks associated with mental health problems. GRiST was based on ideas coming out of a PhD thesis and was very much driven by a research agenda in the first stages. It began in 2000 and was first rolled out as a software system within secondary-care mental health in 2010. Its reach has extended to primary care and the community since then, but each step has been slow and painful. This presentation will use the metaphor of Homer's Odyssey as an illustration of the perils and pitfalls that make the journey from research conception to the real world so painful. The journey's end is increasingly important from the academic REF perspective, but also if we want to benefit society in general.

Speaker: Dr. Christopher Buckingham
Venue: MB753
Time: 14:00 - 15:00

24th October 2017 - Prof. Robert Kowalski

LPS as a Step towards a Unifying Framework for Computing

Computer Science today rests upon shaky foundations of multiple, competing languages and paradigms. On the one hand we have imperative languages for programming, and on the other hand we have declarative languages for program specification, databases and knowledge representation. LPS (logic-based production system) aims to reconcile imperative and declarative representations, by giving a logical interpretation to imperative modes of expression. LPS includes logic programs, which are sets of sentences of the form conclusion if conditions, and treats them as procedures to reduce problems that match the conclusions to subproblems that match the conditions. It also includes reactive rules of the form if antecedent then consequent, which are a logical reconstruction and generalisation of production system rules. Logic programs in LPS can be regarded as representing the beliefs of an intelligent agent, and reactive rules as representing the agent’s goals. Computation in LPS can be understood as attempting to satisfy a global imperative to make the agent’s goals true, by performing actions to make consequents true whenever antecedents become true. Arguably, this way of understanding computation makes LPS not only a practical computer language, but also a scaled-down model of human thinking. In my talk, I will demonstrate an open-source, web-based prototype of LPS, which was developed with Fariba Sadri and Miguel Calejo, to support the teaching of computing and logic in schools. The prototype is accessible from http://lps.doc.ic.ac.uk/.
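The operational idea behind reactive rules – when an antecedent becomes true, the agent performs actions to make the consequent true – can be sketched in a few lines of Python. This is an illustrative toy using the classic "if fire then eliminate" example, not the actual LPS engine from lps.doc.ic.ac.uk:

```python
# Observed facts (fluents) about the current state of the world.
state = {"fire"}

# Actions the agent can perform; here, eliminating the fire removes the fluent.
actions = {"eliminate": lambda s: s.discard("fire")}

# Reactive rule "if fire then eliminate", as an (antecedent, consequent) pair.
reactive_rules = [("fire", "eliminate")]

def cycle(state):
    """One observe-think-act cycle: satisfy the consequent of every rule
    whose antecedent holds in the current state."""
    for antecedent, consequent in reactive_rules:
        if antecedent in state:
            actions[consequent](state)
    return state

cycle(state)  # the fire is observed, so the agent acts to eliminate it
```

In full LPS, the consequent would itself be reduced to subgoals by logic programs (procedures of the form "conclusion if conditions") rather than mapped directly to a primitive action as in this sketch.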

Speaker: Prof. Robert Kowalski
Venue: MB231
Time: 14:00 - 15:00

17th October 2017 - Prof. Andrea Torsello

Partiality and Localization in Functional Correspondences

Functional maps are a framework for dense shape correspondence, modelled as a linear operator between spaces of functions on the shape manifolds. While functional maps can be made resilient to missing parts or incomplete data, overall the framework is not suitable for dealing with partial correspondence, and it suffers from a lack of localization of the point correspondence due to the band-limited nature of the map. In this presentation I will briefly introduce the formalism and then present recent work trying to address both issues. We use perturbation analysis to show how the removal of shape parts changes the Laplace-Beltrami eigenfunctions, and exploit this as a prior on the spectral representation of the correspondence. Further, we show how a change from the low-rank Fourier basis to a sparse spatial basis can improve correspondence localization even in the presence of partiality.
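For readers unfamiliar with the formalism, the standard functional-map construction (background to the talk, not the speaker's new results) can be summarised as follows. A point map $T : N \to M$ induces a linear operator on functions; expanding functions in the first $k$ Laplace-Beltrami eigenfunctions $\{\phi_i\}$ on $M$ and $\{\psi_j\}$ on $N$ turns the correspondence into a $k \times k$ matrix $C$:

```latex
T_F : \mathcal{F}(M) \to \mathcal{F}(N), \qquad T_F(f) = f \circ T,
\qquad
f = \sum_{i=1}^{k} a_i\,\phi_i
\;\Longrightarrow\;
T_F(f) \approx \sum_{j=1}^{k} (C a)_j\,\psi_j .
```

The truncation at $k$ eigenfunctions is what makes the map band-limited, and hence is the source of the localization problem the talk addresses.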

Speaker: Prof. Andrea Torsello
Venue: MB231
Time: 15:00 - 16:00

10th October 2017 - Dr. Tony Beaumont and Dr. Antonio Garcia-Dominguez

Google's Faculty Curriculum Workshop

In July, Google invited two members of staff from several UK universities to their pilot of the week-long Faculty Curriculum Workshop to participate in a series of talks and discussions about Google's hiring process and working conditions, what they are looking for and how we could tweak our teaching to make our graduates more attractive to Google and other similar employers. We went there on behalf of Aston. In this talk, we will talk about how the week went before opening a discussion on how we could best benefit from the FCW conclusions by integrating Google staff into our teaching and events, letting students know about opportunities at Google, and improving our teaching in terms of project-based learning, data structures and problem solving.

Speakers: Dr. Tony Beaumont and Dr. Antonio Garcia-Dominguez
Venue: MB231
Time: 15:00 - 16:00

26th September 2017 - Dr. Lorena Gutiérrez-Madroñal

Testing IoT systems through IoT-TEG

The Internet of Things (IoT) has become increasingly popular in different areas. One of the main drawbacks of IoT systems is the amount of information they have to handle. This information arrives as events that need to be processed in real time in order to make correct decisions. As a consequence, new ways (tools, devices, mechanisms...) of obtaining, processing and transmitting information have to be put into action. It is worth mentioning “Event Processing Languages” (EPLs), which were created to detect, in real time, interesting situations in a particular domain. These languages use patterns to filter the information. A huge amount of data is processed and analysed by EPLs, so any programmer error could seriously affect the outcome through poor decision making. Given that processing the data is crucial, testing and analysing programs which use any EPL is required. The most common mistakes that programmers could make have to be detected. A large number of events with specific values and structures are needed to apply any kind of testing to programs which use an EPL. As this is a very hard task and very prone to error if done by hand, a method is presented which addresses the automated generation of events. This method includes a general definition of what an event type is, and a representation for it is proposed. Additionally, the IoT-TEG event generator is developed based on this definition. Results from experiments and real-world tests show that the developed method meets the demanded requirements.
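The core idea of generating test events from a declarative event-type definition can be sketched as follows. This is a minimal illustration in Python with hypothetical field definitions, in the spirit of (but not identical to) the IoT-TEG approach:

```python
import random

# Hypothetical event-type definition: field name -> (kind, lower, upper).
EVENT_TYPE = {
    "name": "TemperatureReading",
    "fields": {
        "sensor_id": ("int", 1, 100),
        "celsius":   ("float", -10.0, 45.0),
    },
}

def generate_event(event_type, rng=random):
    """Build one random test event conforming to the event-type definition."""
    event = {"type": event_type["name"]}
    for field, (kind, lo, hi) in event_type["fields"].items():
        if kind == "int":
            event[field] = rng.randint(lo, hi)
        else:
            event[field] = rng.uniform(lo, hi)
    return event

# A batch of structurally valid events to feed into an EPL program under test.
events = [generate_event(EVENT_TYPE) for _ in range(1000)]
```

A generator like this can then be extended, as the abstract describes, so that the produced event stream follows a specific behaviour (e.g. a trend or anomaly pattern) rather than being purely random.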

Speaker: Dr. Lorena Gutiérrez-Madroñal
Venue: MB564
Time: 14:00 - 15:00

7th September 2017 - Prof. Krishnamachar Sreenivasan

Flow in Computer Systems

New lines of attack are required to design computers efficiently in view of rapid advances in VLSI technology and increases in software complexity. This talk presents a novel approach that portrays computer job execution as a multi-phase single stream of instructions, data, and control competing for hardware and software resources. Fluid flow methods, though appealing, are limited to a narrow range of problems. A theoretical treatment based on stochastic processes, complemented by measurements, is envisaged. Job flow is frequently unstable, and understanding the factors that lead to flow instability is essential to prevent the frequent, annoying occurrence of the 'Blue Screen', a state which, this analysis finds, is caused by eight factors. A job flow Reynolds number, R, is defined as the ratio of factors aiding job execution to factors staunching job execution. Application layers frequently employed to increase programmer productivity, and advances in computer design meant to exploit Instruction Level Parallelism, are found to decrease program performance. The Online Transaction Processing workload is stable for values of R greater than 0.8; it was used in controlled experiments to collect five sets of results in which the single-processor architectural speed varied from 3.10 to 3.41 cycles per instruction.
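As a minimal sketch of the definition (the factor values below are made up for illustration; only the ratio form and the 0.8 stability threshold come from the abstract):

```python
def job_flow_reynolds(aiding, staunching):
    """Job flow Reynolds number R: ratio of the sum of factors aiding
    job execution to the sum of factors staunching it."""
    return sum(aiding) / sum(staunching)

def is_stable(r, threshold=0.8):
    """The abstract reports OLTP workloads stable for R > 0.8."""
    return r > threshold
```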

Speaker: Prof. Krishnamachar Sreenivasan
Venue: MB146
Time: 15:00 - 16:00

9th May 2017 - Dr. Florian Steinberg

Introduction to second order complexity theory

Classical computability and complexity theory use Turing machines as a foundation for the study of effectivity (computability) and efficiency (polynomial-time computability) of operations on countable discrete structures. For many applications in engineering it would be desirable to compute not only on countable discrete structures but also on continuous structures like the real numbers. The most common model for computation on the real numbers by digital computers is floating-point arithmetic. Modelling real numbers by machine numbers is unsatisfactory from a mathematical point of view, as the content of a proof of correctness of a mathematical algorithm is usually completely lost during an implementation. A mathematically rigorous and realistic model for computation on continuous structures is provided by computable analysis. Finite strings as codes for elements of a structure are replaced by total string functions that provide, on demand, information about the element they encode. Functions between encoded spaces can then be computed by operating on Baire space: the space of all total string functions. Computation on Baire space is done using oracle Turing machines. Intuitively, oracle Turing machines correspond to programs with function calls. It is not a priori clear which of these programs should be considered fast. We introduce the accepted class of polynomial-time computable operators on Baire space, i.e. we specify which programs with function calls should be considered efficient. The definition (due to work by Kapron and Cook) resembles classical polynomial-time computability very closely. However, there are some important differences that we investigate in some detail.
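A rough Python sketch of the "programs with function calls" intuition (illustrative only, not from the talk: a real number is encoded as an oracle that returns a rational within 2^-n of it on demand, and an operator like addition computes by querying its oracles at a higher precision):

```python
from fractions import Fraction

# A real is encoded as an "oracle": given precision n, return a rational
# within 2**-n of the number. This mimics the on-demand information of
# computable analysis' codes. Names here are illustrative.

def real_from_fraction(q):
    """Exact rationals need no approximation: every query returns q."""
    return lambda n: Fraction(q)

def add(x, y):
    """Operator on encoded reals: a 'program with function calls'.
    To get x + y within 2**-n, query both oracles at precision n + 1,
    since two errors of at most 2**-(n+1) sum to at most 2**-n."""
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2(n):
    """Oracle for sqrt(2): bisection on [1, 2] to within 2**-n."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo
```

The complexity question the talk addresses is precisely when such an operator (here, `add`) should count as polynomial-time, given that its running time depends on the cost of the oracle calls.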

Speaker: Dr. Florian Steinberg
Venue: MB108
Time: 11:00 - 12:30

28th March 2017 - Dr. Joao Carreira

Understanding people in videos

The problem of “understanding” people in videos has been a long-standing central challenge in computer vision and artificial intelligence. In this talk I will first discuss recent technical advances in human pose estimation using models that make iterative passes through convolutional networks. I will then describe a novel (sorely needed) dataset for human action recognition, gathered from YouTube, having an order of magnitude more videos than existing datasets. I will show that this dataset enables a new type of spatiotemporal models which obtain results considerably above the state-of-the-art on popular benchmarks.

Speaker: Dr. Joao Carreira
Venue: MB220
Time: 14:00 - 15:00

21st March 2017 - Dr. Leandro Minku

Online Ensemble Learning of Data Streams with Gradually Evolved Classes

In machine learning, class evolution is the phenomenon of class emergence and disappearance. It is likely to occur in many data stream problems, which are problems where additional training data become available over time. For example, in the problem of classifying tweets according to their topic, new topics may emerge over time, and certain topics may become unpopular and not discussed anymore. Therefore, class evolution is an important research topic in the area of learning data streams. Existing work implicitly regards class evolution as an abrupt change. However, in many real world problems, classes emerge or disappear gradually. This gives rise to extra challenges, such as non-stationary imbalance ratios between the different classes in the problem. In this talk, I will present an ensemble approach able to deal with gradually evolved classes. In order to quickly adjust to class evolution, the ensemble maintains a base learner for each class and dynamically creates, updates and (de)activates base learners whenever new training data become available. It also uses a dynamic undersampling technique in order to deal with the non-stationary class imbalance present in this type of problem. Empirical studies demonstrate the effectiveness of the proposed approach in various class evolution scenarios in comparison with existing class evolution approaches.
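The per-class bookkeeping can be sketched as follows (a deliberately simplified illustration, not the algorithm from the talk: nearest-centroid "learners" stand in for real base learners, and deactivation is based on how long a class has gone unseen):

```python
# Sketch of a per-class online ensemble: one base learner per class,
# created when the class first appears, deactivated when unseen for a
# while. Real base learners are replaced by per-class feature means.
class ClassEnsemble:
    def __init__(self, inactive_after=100):
        self.counts = {}
        self.sums = {}
        self.last_seen = {}
        self.step = 0
        self.inactive_after = inactive_after

    def learn(self, features, label):
        self.step += 1
        if label not in self.counts:          # class emergence
            self.counts[label] = 0
            self.sums[label] = [0.0] * len(features)
        self.counts[label] += 1
        self.last_seen[label] = self.step
        for i, v in enumerate(features):
            self.sums[label][i] += v

    def active_classes(self):
        # Classes unseen for a long time are deactivated (class disappearance).
        return [c for c, t in self.last_seen.items()
                if self.step - t < self.inactive_after]

    def predict(self, features):
        def dist(c):
            mean = [s / self.counts[c] for s in self.sums[c]]
            return sum((a - b) ** 2 for a, b in zip(mean, features))
        return min(self.active_classes(), key=dist)
```

The actual approach additionally applies dynamic undersampling to cope with the non-stationary class imbalance; that is omitted here for brevity.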

Speaker: Dr. Leandro Minku
Venue: MB220
Time: 14:00 - 15:00

14th March 2017 - Dr. Mike Joy and Dr. Meurig Beynon

CONSTRUIT!

CONSTRUIT! is a three year Erasmus+ project, involving six partners led by the University of Warwick and scheduled for completion in August 2017, on the theme of "Making construals as a new digital skill for creating interactive open educational resources".

Where programs reflect the practices of a mind following rules, construals are artefacts developed by a mind making sense of a situation. In this respect, a construal is well-matched to the unconventional role that Seymour Papert had in mind for programs in his constructionist approach to learning: that of objects-to-think-with whose construction obliges the learner to reflect on the basis for their knowledge and in that process enrich their domain understanding.

This seminar will discuss the significance of making construals in relation to three research topics motivated by Papert's work:

  • principles and practices that support constructionism;
  • objects-to-think-with as a basis for novel learning resources;
  • exploiting constructionist principles in the classroom and curriculum.
It will take the form of a reflective illustrated account of the work that has been done towards developing, deploying and evaluating techniques and environments for making construals in the course of CONSTRUIT!.

Speakers: Dr. Mike Joy and Dr. Meurig Beynon
Venue: MB220
Time: 14:00 - 15:00

7th March 2017 - Dr. William Langdon

Long-Term Evolution in Genetic Programming

We evolve 6-mux populations of genetic programming binary Boolean trees for up to 100,000 generations. As there is no bloat control, programs with more than a hundred million nodes may be created by crossover. These are by far the largest programs yet evolved. Our unbounded Long-Term Evolution Experiment (LTEE) GP appears not to evolve building blocks but does suggest a limit to bloat.
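For readers unfamiliar with the benchmark, a minimal sketch of 6-mux fitness (the address-bit ordering is an assumption of this illustration; fitness is the number of the 64 input cases answered correctly):

```python
from itertools import product

# The 6-multiplexer: 2 address bits select one of 4 data bits.
def six_mux(a0, a1, d0, d1, d2, d3):
    return (d0, d1, d2, d3)[a0 + 2 * a1]

def fitness(candidate):
    """Count how many of the 64 input cases the candidate gets right."""
    cases = product([0, 1], repeat=6)
    return sum(candidate(*c) == six_mux(*c) for c in cases)
```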

We do see periods of tens or even hundreds of generations where the whole population is functionally converged. In contrast to wetware LTEE experiments with bacteria (genome 4.6 million base pairs in length, 66,000 generations), we do not see continual innovation: although each tree in the population may be different, they all have the same phenotype (in that they can all solve the multiplexer benchmark), and the code next to the tree's root becomes highly stable.

We test theory about the distribution of tree sizes. Surprisingly, in real finite populations with typical GP tournament selection we do see deviations from crossover-only theoretical predictions.

Speaker: Dr. William Langdon
Venue: MB220
Time: 14:00 - 15:00

21st February 2017 - Eike Neumann

Representations for feasibly approximable functions

Given a continuous real function, two of the most basic computational tasks are the computation of its integral and its range. Both problems are generally perceived to be "easy" by practitioners (given that the domain is one-dimensional). Hence it was surprising when Ko and Friedman in 1982 proved that these problems are #P-hard and NP-hard, respectively.

Our hypothesis is that this discrepancy is due to the fact that complexity theorists use the simplest natural representation of continuous real functions, which treats all polytime computable functions equally. Practitioners, on the other hand, use representations which are biased towards a small class of functions that typically occur in practice. We evaluate this hypothesis using both theoretical and practical tools.

Building on work by Labhalla, Lombardi, and Moutai (2001) and Kawamura, Mueller, Roesnick, and Ziegler (2015), we review several common admissible representations of continuous real functions, study their polynomial-time reducibility lattice and benchmark their performance using the AERN Haskell library for exact real computation.

We include the standard continuous function representation used in computational complexity theory where all polytime computable functions are polytime representable.

The other representations we study are all based on rigorous approximations by polynomials or rational functions with dyadic coefficients. In these representations maximisation and integration are feasibly computable but not all polytime computable functions are polytime representable.
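As a toy illustration of why integration becomes feasible under such representations (the function and helper names are invented; a function on [0, 1] is held as polynomial coefficients with exact rational arithmetic, so integration is direct coefficient manipulation):

```python
from fractions import Fraction

# Represent a function on [0, 1] by coefficients of sum(c_i * x**i),
# kept exact with rational arithmetic. Illustrative names only.
def integrate(coeffs):
    """Exact integral over [0, 1]: each c_i * x**i contributes c_i / (i+1)."""
    return sum(Fraction(c) / (i + 1) for i, c in enumerate(coeffs))

def maximise(coeffs, grid=1024):
    """Crude upper estimate of the maximum by grid sampling; a rigorous
    implementation would isolate roots of the derivative instead."""
    def eval_at(x):
        return sum(Fraction(c) * x**i for i, c in enumerate(coeffs))
    return max(eval_at(Fraction(k, grid)) for k in range(grid + 1))
```

By contrast, under the standard black-box representation an integrator can only sample the function pointwise, which is where the #P-hardness bites.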

We show that the representation by piecewise-polynomial approximations is equivalent to the representation by rational function approximations with respect to polynomial-time reducibility.

These two representations seem to form a sweet spot regarding the trade-off between the ability to feasibly represent a large number of functions and the ability to feasibly compute operations such as integration and maximisation.

Speaker: Eike Neumann
Venue: MB220
Time: 14:00 - 15:00

14th February 2017 - Dr. Anakreontas Mentis

Productivity tools for a legacy interpreted programming language

Phoebus Software Ltd is a leading provider of software for the management of lending and savings at financial institutions. Phoebus has been able to produce high-quality, reliable software quickly with the help of its in-house programming language, P4. P4 has features for the rapid development of complex form-based, database-backed applications. However, P4 was designed 20 years ago and lacks tools present in modern programming languages, such as code checkers and IDEs. Moreover, P4 is interpreted and supports code changes on the fly when deployed. This dynamic nature of the language has become an obstacle as the code base has grown very large. We describe how we improved the definition of the P4 language and produced a validator that, when integrated with an IDE, identifies various classes of programming defects while editing a P4 program. In particular, we have added a type system to P4 and defined finite-state models for the database interaction. We also give an overview of the technology used under the hood, namely the Haskell functional programming language, the Parsec parser library and the Hoopl library for control flow analysis.
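To illustrate the finite-state-model idea in isolation (a sketch only, with made-up states and operations; the actual validator is written in Haskell and checks P4 programs), a checker for database cursor usage might look like:

```python
# Illustrative finite-state model of database cursor usage, flagging
# defect patterns such as fetching from a cursor that was never opened.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "fetch"): "opened",
    ("opened", "close"): "closed",
}

def check_cursor_usage(ops):
    """Walk a sequence of cursor operations; return the defects found."""
    state, defects = "closed", []
    for i, op in enumerate(ops):
        nxt = TRANSITIONS.get((state, op))
        if nxt is None:
            defects.append(f"op {i}: '{op}' not allowed in state '{state}'")
        else:
            state = nxt
    if state != "closed":
        defects.append("cursor left open at end of program")
    return defects
```

The validator runs checks of this kind over the control-flow graph of a P4 program rather than over a flat operation list, which is where a library like Hoopl comes in.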

Speaker: Dr. Anakreontas Mentis
Venue: MB220
Time: 14:00 - 15:00

7th February 2017 - Dr. Dimitris Kolovos

Towards Scalable Model-Driven Engineering

Model-Driven Engineering (MDE) is a software engineering approach that promotes domain-specific models as first-class artefacts of the software development and maintenance lifecycle. As MDE is increasingly used for the development of larger and more complex software systems, the current generation of modelling and model management technologies are being pushed to their limits.

In this talk I will provide an overview of some of the most important scalability challenges that manifest when working with large (collections of) domain-specific models. I will then go through ongoing work that attempts to address these challenges by providing support for parallel and reactive code generation, partial model loading, and model indexing.

Speaker: Dr. Dimitris Kolovos
Venue: MB220
Time: 14:00 - 15:00

24th January 2017 - Raghavendra Raj

Business Intelligence Solution for an SME: a Case Study

Business Intelligence (BI) leverages the usefulness of existing information. It equips business users with relevant information to perform various analyses to make key business decisions. Over the last two decades, BI has become a core strategy for the growth of many companies, in particular large corporations. However, studies show that small and medium-sized enterprises (SMEs) lag behind in the implementation and exploitation of BI solutions. To stay ahead of the competition, SMEs must be able to monitor and effectively use all of their resources, in particular information resources, to assist them in making important business decisions. We have examined challenges such as lack of technical expertise and limited budget when implementing a BI solution within an SME in the UK. In light of our experiences in tackling these issues, this seminar discusses how these challenges can be overcome through applying various tools and strategies, and the potential benefits.

Speaker: Raghavendra Raj
Venue: MB220
Time: 14:00 - 15:00

20th January 2017 - Dr. Stephen Marsh

Slow Computing, Wisdom, and ideas for Comfort-able Answers to Fake News

Remember Flash Crashes? Computing is fast, by default. That's good, but there are times when it pays to slow down to the speed of thought and consider what the fast decisions might result in, not far down the line. More, it behooves us to think more about the people in the system, and how they can help the system be 'more'. This idea, the concept of Slow Computing, grew from discussions at Dagstuhl about a year ago, and gradually began to contribute to explorations of Wisdom in computational systems. Wisdom, the capacity for contextually guided rational and correct thought in unfamiliar situations, seems exactly the kind of thing we need to bring our computational systems into the human world, where they are going to have to be. This talk presents our thoughts and research on Slow Computing and Wisdom before diving into the related concepts of Device Comfort and Computational Trust, and ends with a look at how thinking more slowly and integrating comfort and trust reasoning into information systems might just help us with some of the more pressing challenges of social media.

Speaker: Dr. Stephen Marsh
Venue: MB146
Time: 14:00 - 15:00

29th November 2016 - Dr. Yulan He

Unsupervised Event Extraction and Storyline Generation from Text

This talk consists of two parts. In the first part, I will present our proposed Latent Event and Categorisation Model (LECM), an unsupervised Bayesian model for the extraction of structured representations of events from Twitter without the use of any labelled data. The extracted events are automatically clustered into coherent event-type groups. The proposed framework has been evaluated on over 60 million tweets and has achieved a precision of 70%, outperforming the state-of-the-art open event extraction system by nearly 6%. The LECM model has been extended to jointly model event extraction and visualisation, in which each event is modelled as a joint distribution over named entities, a date, a location and event-related keywords. Moreover, both tweets and event instances are associated with coordinates in the visualisation space. Experimental results show that the proposed approach performs remarkably better than both the state-of-the-art event extraction method and a pipeline approach for event extraction and visualisation.

In the second part of my talk, I will present a non-parametric generative model to extract structured representations and evolution patterns of storylines simultaneously. In the model, each storyline is modelled as a joint distribution over locations, organizations, persons, keywords and a set of topics. We further combine this model with the Chinese restaurant process so that the number of storylines can be determined automatically without human intervention. The proposed model has been evaluated on three news corpora, and the experimental results show that it generates coherent storylines from news articles.

Speaker: Dr. Yulan He
Venue: MB404A
Time: 14:00 - 15:00

15th November 2016 - Dr. David Sanderson

Advanced Manufacturing: An Application Domain for Adaptive Systems Research

This talk will discuss manufacturing as an application domain and some of the research being done at the Institute for Advanced Manufacturing at the University of Nottingham. The talk will be grounded in real demonstration scenarios designed to address industrial problems. Particular detail will be given to the adaptive agent-based architectural concept and an approach for determining the realisability (or manufacturability) of products in a "batch-size-of-one" situation, where each product being made in a system may be unique.

Speaker: Dr. David Sanderson
Venue: MB404A
Time: 14:00 - 15:00

11th October 2016 - Dr. Antonio Garcia-Dominguez

From linked files to NoSQL graphs: analysis of Eclipse projects

Hawk [1] is an indexing solution that can monitor collections of structured files, mirror them into typed graphs, and query them efficiently and concisely. Nodes can be indexed by attribute values, and types can be extended with derived attributes and edges, depending on the queries to be performed.

Hawk has been recently extended with the capability for reading the metadata that links Eclipse plugins together and groups them into high-level projects. In this talk, I will introduce the concepts behind Hawk and discuss the state of our current studies on the eclipse.org codebase. I am looking for feedback on our current approach and pointers to structural pattern recognition approaches that may be useful for this software repository mining problem.
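The core idea of attribute indexing can be sketched in a few lines (a toy illustration only; this is not Hawk's actual API, and the type and attribute names are invented):

```python
# Toy illustration of mirroring structured records into a typed graph
# with an attribute index, so lookups avoid a full node scan.
class GraphIndex:
    def __init__(self):
        self.nodes = {}   # node id -> (type, attributes)
        self.index = {}   # (type, attr, value) -> set of node ids

    def add_node(self, node_id, node_type, attrs):
        """Mirror one record as a typed node, indexing every attribute."""
        self.nodes[node_id] = (node_type, attrs)
        for attr, value in attrs.items():
            self.index.setdefault((node_type, attr, value), set()).add(node_id)

    def find(self, node_type, attr, value):
        """Indexed lookup: all nodes of a type with a given attribute value."""
        return sorted(self.index.get((node_type, attr, value), set()))
```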

[1]: https://github.com/mondo-project/mondo-hawk

Speaker: Dr. Antonio Garcia-Dominguez
Venue: MB404A
Time: 14:00 - 15:00