
Cagri Ozcinar

Senior Engineer

Samsung Electronics

Biography

You can contact me by email (cagriozcinar {at} gmail {dot} com), or via LinkedIn and GitHub.

Cagri Ozcinar is a senior research engineer at Samsung Research Institute UK (SRUK). Before that, he was a research fellow with the V-SENSE project at the School of Computer Science and Statistics (SCSS), Trinity College Dublin (TCD), Ireland. Before joining the V-SENSE team, he was a postdoctoral fellow in the Multimedia group at Institut Mines-Télécom, Télécom ParisTech, Paris, France. Cagri received the MSc (Hons.) and PhD degrees in electronic engineering from the University of Surrey, UK, in 2010 and 2015, respectively. His current research interests include image/video processing, coding/streaming, computer vision, and machine learning/deep learning for media technologies.

Interests

  • Computer Vision
  • Deep Learning
  • Visual Attention
  • Media coding/streaming
  • Immersive Media
  • VR/AR

Education

  • PhD in Multiview video communication, 2015

    University of Surrey

  • MSc in Multimedia signal processing and communication, 2010

    University of Surrey

Experience

Senior Engineer

Samsung Electronics

Mar 2020 – Present, Staines, UK
Responsibilities include:
  • Deep Learning
  • Machine Learning
  • AutoML
  • Computer Vision

Research Fellow

Trinity College Dublin - SCSS - V-SENSE

Jul 2016 – Feb 2020, Dublin, Ireland
Responsibilities include:
  • Video Communication
  • Saliency/Visual Attention
  • VR/AR

Postdoctoral Researcher

Télécom ParisTech – Département IDS - Groupe Multimédia

May 2015 – May 2016, Paris, France
Responsibilities include:
  • High Dynamic Range Imaging
  • Tone mapping

Researcher

University of Surrey - CVSSP - I-LAB

Oct 2010 – May 2015, Guildford, UK
Responsibilities include:
  • Multiview video coding and streaming
  • View synthesis
  • Peer-to-peer networking
  • QoE

Research

Organizer

  • Special Session on Perceptual Analysis and Representations for Immersive Imaging, MMSP 2020.

  • Tutorial on Immersive Imaging Technologies: from Capture to Display, ICME 2020.

  • Special Session on Learning-based Visual QoE Estimation Methods, QoMEX 2020.

  • Special Session on Recent Advances in Immersive Imaging Technologies, ICME 2020.

  • Special Session on Recent Advances in Immersive Imaging Technologies, ICIP 2019.

  • Special Session on Recent Advances in Immersive Imaging Technologies, EUSIPCO 2018.

  • Guest Editor: Special Issue on Applications of Visual Analysis of Human Behaviour, EURASIP Journal on Image and Video Processing.

  • Challenge and Workshop: Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation, FG 2017.

Code and Data

  • Dataset and tools for the paper “Do Users Behave Similarly in VR? Investigation of the Influence on the System Design”, ACM TOMM 2020.
  • Tools for the paper “Voronoi-based Objective Quality Metrics for Omnidirectional Video”, QoMEX 2019.
  • Source code for the paper “Super-resolution of Omnidirectional Images Using Adversarial Learning”, MMSP 2019.
  • Dataset for the paper “Towards generating ambisonics using audio-visual cue for virtual reality”, ICASSP 2019.
  • Tools for the paper “Visual Attention-Aware Omnidirectional Video Streaming Using Optimal Tiles for Virtual Reality”, IEEE JETCAS 2019.
  • Dataset for the paper “Visual Attention in Omnidirectional Video for Virtual Reality Applications”, QoMEX 2018.
