Computational geometry is a branch of algorithms research with an emphasis on geometric problems. In this area, we explore new techniques for geometric range searching and related problems.
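As a minimal illustration of the range-searching flavor of these problems (not our actual research code), the sketch below answers one-dimensional range-counting queries with two binary searches after a single preprocessing sort; the point values are made up:

```python
import bisect

def build_index(points):
    """Sort the points once (O(n log n) preprocessing) so that
    range queries can be answered in O(log n) time."""
    return sorted(points)

def range_count(index, lo, hi):
    """Count points p with lo <= p <= hi via two binary searches."""
    left = bisect.bisect_left(index, lo)
    right = bisect.bisect_right(index, hi)
    return right - left

idx = build_index([7, 1, 4, 9, 3, 8])
print(range_count(idx, 3, 8))  # 4 points fall in [3, 8]: 3, 4, 7, 8
```

Higher-dimensional variants (e.g., axis-aligned rectangles answered by kd-trees or range trees) follow the same preprocess-then-query pattern with different data structures.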
Multidimensional data comprises the bulk of data in statistical and scientific databases, Geographic Information Systems, and multimedia databases. Our focus is on the fundamental complexity of multidimensional data processing in the external-memory and distributed settings. We are also investigating techniques for simulating quantum computers via high-performance scientific computing.
Many problems arise in non-traditional types of database systems, such as distributed, peer-to-peer and stream databases. We are particularly concerned with searching and query processing in the context of peer-to-peer databases.
To facilitate technology transfer to industry, we complement our basic research with the development of distributed information systems, particularly Geographic Information Systems and Grids, for novel applications such as geospatial wikis and WebGIS systems for decision support.
The massive volume of an organization's data makes it difficult to process efficiently, as decision support systems require. Our research focuses on creating and optimizing data structures and techniques for computing, storing, and indexing different views of the collected data, so as to answer complex queries quickly.
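The idea of precomputing a view can be sketched in a few lines; the fact table and column names below are hypothetical, not drawn from any real system:

```python
from collections import defaultdict

# Hypothetical sales fact table: (region, product, amount).
facts = [
    ("north", "widget", 10.0),
    ("north", "gadget", 5.0),
    ("south", "widget", 7.5),
    ("north", "widget", 2.5),
]

def materialize_view(rows):
    """Precompute SUM(amount) grouped by region, so that later
    aggregate queries are answered by a dictionary lookup instead
    of a scan over the full fact table."""
    view = defaultdict(float)
    for region, _product, amount in rows:
        view[region] += amount
    return dict(view)

view = materialize_view(facts)
print(view["north"])  # 17.5
```

Real systems face the harder questions this sketch sidesteps: which of the exponentially many views to materialize, how to index them, and how to keep them consistent under updates.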
Recent technological advances have allowed the use of large-scale sensor networks in a variety of applications, such as healthcare, traffic monitoring, agriculture, and area and production monitoring. The data in these applications are continuously generated, creating data streams that must be processed, cleaned, and correlated effectively using appropriate in-network techniques, so as to achieve the goals of the application.
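One classic in-network technique is aggregation along the routing tree: each node merges its children's partial aggregates with its own reading and forwards a single small message upstream, rather than relaying every raw reading. A toy simulation of this idea, over a made-up four-node tree of readings:

```python
def aggregate(node):
    """Return the partial (sum, count) aggregate for the subtree
    rooted at this sensor node: merge the children's partial
    aggregates with the node's own reading, then forward one
    constant-size message instead of all raw readings."""
    s, c = node["reading"], 1
    for child in node.get("children", []):
        child_sum, child_count = aggregate(child)
        s += child_sum
        c += child_count
    return s, c

# Hypothetical routing tree of temperature readings.
tree = {
    "reading": 20.0,
    "children": [
        {"reading": 22.0},
        {"reading": 21.0, "children": [{"reading": 25.0}]},
    ],
}
total, count = aggregate(tree)
print(total / count)  # network-wide average: 22.0
```

The payoff is communication cost: every link carries one (sum, count) pair regardless of subtree size, which matters because radio transmission dominates a sensor node's energy budget.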
Processing huge amounts of data in data mining and decision support systems, combined with the small query response times these applications require, makes it imperative to develop approximate processing techniques that can answer such queries quickly and with reasonable accuracy.
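A representative example of such a technique (one of many; not specific to our work) is the Count-Min sketch, which answers frequency queries from a small fixed-size summary instead of the full data. A minimal sketch of the idea, using `hashlib` in place of a proper pairwise-independent hash family:

```python
import hashlib

class CountMinSketch:
    """Fixed-size summary for approximate frequency queries:
    estimates never undercount, and overcount by a bounded
    amount with high probability."""

    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, item, row):
        # Derive a per-row hash bucket from a salted digest.
        digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item):
        for row in range(self.depth):
            self.table[row][self._bucket(item, row)] += 1

    def estimate(self, item):
        # Each row overestimates due to collisions; take the minimum.
        return min(self.table[row][self._bucket(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for item in ["a", "b", "a", "c", "a"]:
    cms.add(item)
print(cms.estimate("a"))  # at least 3 (the true count), rarely more
```

The trade-off is explicit: memory is fixed at width × depth counters no matter how large the stream grows, and the error bound shrinks as those parameters increase.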
Several real-world applications need to effectively manage and reason about large amounts of data that are inherently uncertain. For instance, pervasive computing applications must constantly reason about volumes of noisy sensor/RFID readings for a variety of purposes, including motion prediction and human behavior modeling. Thus, there is a growing realization that uncertain and probabilistic information should be treated as a "first-class citizen" in modern data-management systems. Our research focuses on novel data models, data-management system architectures, as well as efficient algorithms for effectively processing and analyzing massive probabilistic data sets.
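To make the "first-class citizen" point concrete, consider the simplest probabilistic data model, a tuple-independent table, where each tuple exists independently with a given probability; the RFID readings below are invented for illustration:

```python
# Hypothetical tuple-independent probabilistic table: each RFID
# reading is a real observation with the given probability.
readings = [
    ("tag1", "room_a", 0.9),
    ("tag2", "room_a", 0.6),
    ("tag3", "room_b", 0.8),
]

def expected_count(rows, room):
    """Under tuple independence, the expected answer to
    SELECT COUNT(*) ... WHERE room = ? is just the sum of the
    membership probabilities of the qualifying tuples, by
    linearity of expectation."""
    return sum(p for _tag, r, p in rows if r == room)

print(expected_count(readings, "room_a"))  # 0.9 + 0.6 = 1.5
```

Aggregates like this one stay easy; the research challenge is that many natural queries over probabilistic data (e.g., joins with correlated tuples) have #P-hard exact semantics, which motivates the specialized models, architectures, and algorithms mentioned above.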