A powerful solution that hears, sees, and thinks like a person.
Understanding video content is much more than just detecting elements. Our neural-net-based technology comprehends video and produces semantic, time-dependent, context-relevant analysis.
Metaliquid is a computer vision and audition deep learning solution focused on the needs of the evolving video and media market.
We offer a range of services to leverage the analysis and the value of information extracted from video live feeds and digital video assets.
AI powered video content comprehension.
We apply neural networks and deep learning technology to analyze video and audio content.
Our technology is able to process videos and identify thousands of concepts in real-time.
Thousands of concepts identified in real time.
Our solution is able to recognize people, objects, landscapes, places, categories, and mood, and to analyze the semantic relations between them.
We developed the Metaliquid core to answer questions about content and understand video the way a human being does:
- Who are the people / talents appearing in the scene?
- What is the setting?
- Which objects appear in each scene?
- What is happening in the scene?
- How do characters and elements interact?
- What is the nature of the interaction?
- What is the sentiment of each scene, and what feelings does it evoke in viewers?
Semantic content analysis is key to delivering premium viewer experiences.
Digital video content is largely unexplored, and so represents a huge, high-value black box. AI-powered content analysis and discovery is the answer.
Video consumption is skyrocketing, live video is growing rapidly, and the media market landscape is evolving at a quick pace. VOD and OTT businesses must adopt solutions that leverage their larger-than-ever content libraries in order to offer tailored experiences to their viewers.
Thanks to Metaliquid's advanced analytics, our customers can extract valuable information from their videos and offer improved premium services to their audiences.
- Content discovery and analysis
- Live experience and real-time monitoring
- Video content / asset management
- Relevant and contextual advertising
The information extracted from the video is used to build an overall time-dependent semantic graph that illustrates the data flow dynamically.
The data is available in a machine readable format which can be easily shared and integrated in the customer's workflow.
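As an illustration of what such machine-readable output could look like, here is a sketch in Python. The field names and structure are hypothetical assumptions for this example, not Metaliquid's actual schema:

```python
import json

# Hypothetical time-dependent scene annotation; field names and values
# are illustrative only, not Metaliquid's actual output format.
scene = {
    "start": 12.0,   # seconds from the beginning of the video
    "end": 18.5,
    "setting": "beach",
    "people": ["surfer"],
    "objects": ["surfboard", "wave"],
    "action": "surfing",
    "sentiment": "excitement",
    "relations": [
        {"subject": "surfer", "predicate": "rides", "object": "surfboard"}
    ],
}

# Serialized as JSON, the annotation can be shared with downstream systems.
payload = json.dumps(scene, indent=2)
print(payload)
```

A time-ordered list of such annotations is one simple way a time-dependent semantic graph could be exchanged between systems.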
We're constantly training our neural nets to recognize and understand new concepts using high-quality datasets.
Neural net technology to refine and augment the capabilities of our services. Every day.
Our technology core has been designed to be agile and extremely scalable. We can train our neural nets to recognize new concepts according to clients' needs.
Metaliquid's core architecture is composed of four components:
An ontology module managing the taxonomies used in the comprehension and description of the video and the related datasets used for training the neural networks.
A deep learning module dedicated to training the neural networks on the defined datasets.
A core processing module responsible for processing content in real time and producing the time-dependent semantic graph that describes what's happening in the video.
A services module responsible for delivering the semantic graph and sharing the data with clients' systems.
Metaliquid's core is designed to deliver high performance, and it scales easily when new taxonomies are introduced. We're constantly expanding our dictionary to teach new concepts to our neural networks.
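The flow through the four modules can be sketched as follows. This is purely an illustrative assumption about how the components hand data to one another; all module and function names are hypothetical:

```python
import json

def ontology_module():
    """Return the taxonomy of concepts used for annotation and training (hypothetical)."""
    return {"concepts": ["person", "beach", "surfboard", "surfing"]}

def deep_learning_module(taxonomy):
    """Stand-in for training: return a trivial 'model' that tags frames
    with any taxonomy concepts found among their labels."""
    concepts = set(taxonomy["concepts"])
    return lambda frame: sorted(concepts & set(frame["labels"]))

def core_processing_module(model, frames):
    """Build the time-dependent semantic graph from per-frame detections."""
    return [{"t": f["t"], "detected": model(f)} for f in frames]

def services_module(graph):
    """Expose the semantic graph in a machine-readable form for client systems."""
    return json.dumps(graph)

taxonomy = ontology_module()
model = deep_learning_module(taxonomy)
frames = [{"t": 0.0, "labels": ["beach", "seagull"]},
          {"t": 1.0, "labels": ["surfboard", "person"]}]
graph = core_processing_module(model, frames)
print(services_module(graph))
```

The point of the sketch is the separation of concerns: the ontology defines what can be recognized, the deep learning module learns to recognize it, the core processor turns detections into a time-dependent graph, and the services layer makes that graph available to clients.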