Kick-off meeting of the VELaSSCo project: an FP7 project of the EC, coordinated by CIMNE, to develop the most advanced visualization tools for dealing with huge amounts of simulation data

On Tuesday, 14th January, a group of European experts in computational engineering applications attended the kick-off meeting that officially launched the VELaSSCo project.

VELaSSCo, “Visual Analysis for Extremely Large-Scale Scientific Computing”, is a STREP Collaborative project within the FP7-ICT programme of the European Union. The vision of VELaSSCo is to provide new visual analysis methods for large-scale simulations, serving the petabyte era and preparing for the exabyte era, by adopting Big Data tools and architectures for the engineering and scientific community and by leveraging new forms of in-situ processing for data analytics and hardware-accelerated interactive visualization. In short, the project will provide Big Data tools that allow the engineering and scientific community to manipulate simulations with billions of records more easily and quickly.

The VELaSSCo consortium, coordinated by CIMNE, includes three research groups with extensive expertise in scalable algorithms for numerical-methods applications and their visualization, and with a long track record of coordinating and participating in EC projects in this field (the School of Engineering of the University of Edinburgh, the Fraunhofer-Institut für Graphische Datenverarbeitung and CIMNE); two large research organisations that have led previous Big Data projects (Stiftelsen SINTEF and the Institut national de recherche en informatique et en automatique), with experience as end-users and developers of commercial codes requiring intensive computation; an SME vendor of database management systems for engineering applications (JOTNE); and a major IT provider (ATOS) with a proven track record in commercial and research Big Data projects.

The budget for the three-year project is about 6.5 million euro, of which 5 million euro is funded by the EU.

About Big Data
Well-known companies such as Amazon, Facebook or Google have popularised “Big Data” technology by proving the value of working efficiently with huge amounts of data. This information includes databases of products, customers and suppliers in the case of the world's largest online retailer; photos, messages and posts in the case of the famous social network; and collections of maps, websites, photos or words in the case of the Californian company. In all these cases, the data has progressively grown to the incredible size of petabytes, mainly due to an enormous number of users. However, the coupling between these data is very weak, if it exists at all.

Since the advent of computers, the simulation world has demanded the most powerful machines to work with as much data as possible, in order to represent the most complex problems in science and engineering with maximum accuracy. Scientific data and, in particular, simulation data is intrinsically “big”, limited only by the available computational resources. In contrast to the traditional “Big Data” described above, simulation data is big mainly because of the nature of the problem, not the number of users, and the coupling within the data is usually very strong.