Research Projects

Model-based Framework for Dependable Sensing and Actuation in Intelligent Decentralized IoT Systems

BRAIN-IoT provides a model-based framework for dynamic composability and deployment of heterogeneous IoT platforms, overcoming the challenge of cross-platform interoperability. The framework provides the capabilities to model smart behaviors in relevant IoT domains and to develop secure distributed and intelligent applications.

Furthermore, BRAIN-IoT establishes a methodology that supports secure, smart, autonomous, and cooperative behaviors of populations of heterogeneous IoT platforms. The project will implement open semantic models enabling interoperability, data exchange and control features, as well as privacy enforcing mechanisms and data ownership policies. BRAIN-IoT will also offer model-based tools easing the development of innovative, tightly integrated IoT and CPS solutions.

Results of interest to Modelia include the IoT Modeling Language (IoT-ML) and its modeling tool. IoT-ML, based on the MARTE and SysML standards, covers not only the structural concepts of an IoT design but also adds the capacity to model the semantics, capabilities, and smart behaviors of devices. For behavior modeling, we support both classical event-based specification of smart behaviors and machine-learning characterization (e.g., hyperparameters). The IoT-ML modeling tool offers features such as automatic code generation (e.g., Python neural network skeleton code), Web of Things (WoT) Thing Description (TD) generation, and OSGi MANIFEST generation, which facilitates the deployment of a new solution in the cloud. The modeling tool also uses the Eclipse sensiNact platform to communicate with devices in order to control and monitor their states within the design model (Model@Runtime).
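As a rough illustration of what hyperparameter-driven skeleton generation can look like, the sketch below renders a PyTorch-style class from a dictionary of hyperparameters. The function and field names are illustrative assumptions, not IoT-ML's actual generator.

```python
# Hypothetical sketch of hyperparameter-driven skeleton generation,
# in the spirit of IoT-ML's Python code generation (names are illustrative).

SKELETON = '''\
import torch.nn as nn

class {name}(nn.Module):
    """Auto-generated skeleton; fill in forward()."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
{layers}
        )
'''

def generate_skeleton(name, hyperparams):
    """Render a neural-network skeleton from model-level hyperparameters."""
    dims = [hyperparams["input_size"],
            *hyperparams["hidden_sizes"],
            hyperparams["output_size"]]
    lines = []
    for i, (d_in, d_out) in enumerate(zip(dims, dims[1:])):
        lines.append(f"            nn.Linear({d_in}, {d_out}),")
        if i < len(dims) - 2:  # activation between hidden layers only
            lines.append(f"            nn.{hyperparams['activation']}(),")
    return SKELETON.format(name=name, layers="\n".join(lines))

print(generate_skeleton("AnomalyDetector",
                        {"input_size": 16, "hidden_sizes": [32],
                         "output_size": 2, "activation": "ReLU"}))
```

The generator emits plain text, so the modeling tool itself needs no deep-learning dependency; only the generated code does.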

Project website:


Core Members: ISMB, CEA, University of Grenoble Alpes, Paremus, ST Microelectronics Grenoble, Siemens Aktiengesellschaft, Eclipse Foundation Europe, IDATE, Airbus Cyber Security, Robotnik Automation, EMALCSA, Improving Metrics


Funding institutions: European Commission R&I (Call: H2020-IoT-2016-2017, Topic: IoT-03-2017)


Standardization of Uncertainty for Model-based Software Engineering

This project deals with the specification and modeling of uncertainty in systems and/or software engineering. In particular, it aims to provide a solution for the OMG’s Request For Proposals: Precise Semantics for Uncertainty Modeling. As part of this project, we try to precisely define the term uncertainty in the field of MBSE and, as the RFP solicits, “identify uncertainties explicitly, analyze them, and make an effort to deal with them in all phases of development.”
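One way to make uncertainty explicit at the model level is to attach a degree of confidence to values. The sketch below is a minimal illustration in that spirit (not the actual RFP submission): a Boolean carrying the probability that it holds, combined under an independence assumption.

```python
from dataclasses import dataclass

# Illustrative sketch only: a Boolean annotated with the probability
# that the proposition it represents is true.

@dataclass(frozen=True)
class UBoolean:
    confidence: float  # probability in [0, 1] that the proposition holds

    def __and__(self, other):
        # Assumes the two propositions are independent.
        return UBoolean(self.confidence * other.confidence)

    def __or__(self, other):
        return UBoolean(self.confidence + other.confidence
                        - self.confidence * other.confidence)

# A safety condition built from two uncertain sensor readings:
door_closed = UBoolean(0.95)
lock_engaged = UBoolean(0.90)
safe = door_closed & lock_engaged  # confidence 0.855
```

Propagating confidences through expressions like this is one concrete way to "identify uncertainties explicitly, analyze them, and deal with them" across development phases.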

Core Members: Bran Selic, Antonio Vallecillo, CEA


Smart Model Autocompletion

The aim of this project is to provide modeling languages with both syntactic and semantic completion features, i.e., to equip modeling tools with mechanisms that assist UML/OCL users during development by suggesting meaningful ways to complete their models.
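A toy sketch of the syntactic side of the idea: suggest attribute names for a class by ranking attributes seen in a corpus of existing models. The corpus and function names below are made up for illustration; the actual project targets full UML/OCL models.

```python
from collections import Counter

# Hypothetical mini-corpus of classes and their attributes.
CORPUS = {
    "Customer": ["name", "email", "address"],
    "Supplier": ["name", "email", "phone"],
    "Order":    ["date", "total", "status"],
}

def suggest_attributes(known_attrs, top_k=3):
    """Rank candidate attributes by frequency across the corpus,
    skipping attributes the class under development already has."""
    counts = Counter(a for attrs in CORPUS.values() for a in attrs
                     if a not in known_attrs)
    return [a for a, _ in counts.most_common(top_k)]

suggest_attributes(known_attrs=["name"])  # "email" ranks first (seen twice)
```

Semantic completion would go further, e.g., proposing only suggestions that keep the model consistent with its OCL constraints.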

Core Members: Loli Burgueño, Jordi Cabot, Sébastien Gérard


Funding institutions: CEA, UOC


Log analysis for modeling tools

Modeling tools could learn from the way a given user employs the tool and, based on that learning process, adapt the interface or offer personalized recommendations to the user.
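A minimal sketch of what such learning could look like, assuming the tool records sequences of editor actions: count which action most often follows each action, then recommend the likeliest next step. Action names and the API are illustrative.

```python
from collections import Counter, defaultdict

def next_action_model(logs):
    """Build a bigram model: for each action, count what follows it."""
    following = defaultdict(Counter)
    for session in logs:
        for current, nxt in zip(session, session[1:]):
            following[current][nxt] += 1
    return following

def recommend(model, last_action):
    """Return the most frequent follower of last_action, or None."""
    ranked = model[last_action].most_common(1)
    return ranked[0][0] if ranked else None

# Hypothetical recorded sessions from a UML editor:
logs = [
    ["create_class", "add_attribute", "add_operation"],
    ["create_class", "add_attribute", "create_association"],
    ["create_class", "rename_class"],
]
model = next_action_model(logs)
recommend(model, "create_class")  # -> "add_attribute"
```

Real tools would use richer context than a single previous action, but even bigram counts already capture per-user habits.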

Core Members: Maxime Savary-Leblanc, Xavier Le Pallec, Sébastien Gérard


Funding institutions: CEA, Hauts-de-France Region



CAESAR

CAESAR stands for Computer Aided Engineering for Systems ARchitecture. It is a software platform that supports the transformation of Systems Engineering (SE) into a rigorous, integrated, and model-centric engineering practice. CAESAR supports the definition of a rigorous, multi-disciplinary, and tool-neutral SE methodology and adopts the semantic web approach of representing a system model as a set of ontologies. It enables methodology-driven authoring of the system model using a set of federated system design tools; provides a semantic data warehouse that continuously integrates model fragments into a unified system model; automates the analysis of the system model using a variety of analysis tools that inspect it through ontology-based APIs and queries; facilitates the synchronization of the model fragments by proposing changes based on insights gained from analyzing the system model; facilitates the review of the system model's status by generating gate products and organizing them into easy-to-browse dashboards; and manages the configuration of the system model, protecting its baseline with a change-request process that allows peer review, comments, and approvals.
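The ontology-based querying mentioned above can be illustrated with a tiny triple-pattern matcher over a system model stored as subject-predicate-object triples. CAESAR itself builds on semantic-web tooling; the vocabulary and data below are invented for illustration.

```python
# Hypothetical system-model fragment as RDF-style triples.
TRIPLES = [
    ("Rover", "hasSubsystem", "PowerSubsystem"),
    ("Rover", "hasSubsystem", "CommSubsystem"),
    ("PowerSubsystem", "hasComponent", "Battery"),
    ("Battery", "hasMass", "12kg"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard,
    like a variable in a SPARQL basic graph pattern."""
    return [t for t in TRIPLES
            if subject in (None, t[0])
            and predicate in (None, t[1])
            and obj in (None, t[2])]

query("Rover", "hasSubsystem", None)  # both subsystems of the Rover
```

Analysis tools can then be written against such queries rather than against any one design tool's native format, which is what makes the approach tool-neutral.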

Project website:


Core Members: Maged Elaasar

Past Projects

Image Recognition by Deep Learning Applied to Smart Modeling

Deep learning is one of the most prominent AI methods today. Built on neural networks, deep learning solves problems by self-configuring through training. Its efficiency for image recognition is proven, and the technique is used by Google Images and autonomous vehicles.

During the development of a system, visuals are one of the most efficient means to exchange ideas. Often these ideas are only represented graphically, with more or less formalism. Because of this imprecision, it takes time to consolidate the ideas into a correct, reusable model. Furthermore, the poor interoperability between tools and languages slows down model exchange. Within this project, we have developed a tool that automatically rebuilds a UML model from a picture of a diagram.

Our approach is based on real-time object detection neural networks and Optical Character Recognition (OCR). For OCR, we use Google Tesseract, which is based on neural networks. For object detection, we compared the results of a PyTorch implementation of SSD and a TensorFlow implementation of YOLO, training both networks on the same training set. SSD was chosen because it achieved a precision of more than 90% for class-shape detection. Surprisingly, although trained on computer-generated diagrams, our tool is also able to detect hand-drawn diagrams.
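One post-processing step such a pipeline needs is associating OCR output with detected shapes. A simple sketch of that step (illustrative, not the project's actual code): attach each recognized text box to the detected shape whose bounding box contains its center.

```python
def center(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def contains(box, point):
    x1, y1, x2, y2 = box
    px, py = point
    return x1 <= px <= x2 and y1 <= py <= y2

def attach_labels(shapes, texts):
    """shapes: {name: box}; texts: [(string, box)] -> {name: [strings]}."""
    labels = {name: [] for name in shapes}
    for text, tbox in texts:
        for name, sbox in shapes.items():
            if contains(sbox, center(tbox)):
                labels[name].append(text)
                break
    return labels

# Hypothetical detector and OCR output for two class shapes:
shapes = {"class_1": (0, 0, 100, 60), "class_2": (150, 0, 250, 60)}
texts = [("Customer", (10, 5, 90, 20)), ("Order", (160, 5, 240, 20))]
attach_labels(shapes, texts)
# {'class_1': ['Customer'], 'class_2': ['Order']}
```

Edge labels floating between shapes need a different rule (e.g., nearest edge), which is one reason text detection is harder than shape detection.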

One main challenge we faced during training was the lack of UML diagram data sets. Therefore, this project has also produced a data set of about 1500 UML class diagrams in which nodes and edges have their coordinates and natures labeled. Images were taken from the UML in OSS project and Google Images. Now that the framework is developed and reusable, we are working on additional training sets to improve the precision of shape, arrow, and text detection. One idea is to use Generative Adversarial Networks to generate realistic UML diagrams. We also plan to open up detection to UML diagrams other than the class diagram, and we are working on smart model reconstruction: some elements of a semantic model are not always shown in a notation model, and a rebuilt model may need to be integrated into an existing model. We are exploring SVM and K-means clustering solutions for such problems.
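To make the labeling concrete, here is a hypothetical example of what a per-image annotation with node/edge coordinates and natures could look like; the field names are assumptions, not the data set's actual schema.

```python
import json

# Illustrative annotation for one diagram image: each node and edge
# carries its nature (label) and coordinates.
annotation = {
    "image": "diagram_0042.png",
    "nodes": [
        {"label": "class", "bbox": [12, 8, 180, 96], "text": "Customer"},
        {"label": "class", "bbox": [260, 8, 430, 96], "text": "Order"},
    ],
    "edges": [
        {"label": "association", "source": 0, "target": 1,
         "points": [[180, 52], [260, 52]]},
    ],
}

serialized = json.dumps(annotation)  # one JSON record per labeled image
```

Keeping annotations in a plain, tool-neutral format like this is what lets the same data set train different detector architectures.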

Core Members: Shuai Li, Zhihao Lyu (Master's intern at CEA), and Qing Zhu (Master's intern at CEA)


Funding institutions: CEA