Doha: Together with the Supreme Committee for Delivery and Legacy (SC), Qatar University (QU) College of Engineering has developed an intelligent crowd management and control system comprising several components for crowd counting, facial recognition, and abnormal event detection (AED).
The QU research team, led by Professor Sumaya Al Maadeed as the study’s Principal Investigator, includes Dr. Noor Al Maadeed, Associate Dean of Graduate Studies for Academic Affairs and Associate Professor of Computer Engineering; Dr. Khalid Abualsaud, Lecturer in Computer Engineering; Professor Amr Mohamed, Professor of Computer Engineering; Professor Tamer Khattab, Professor of Electrical Engineering and Acting Director of the Center of Excellence in Teaching and Learning; Dr. Yassine Himeur and Dr. Omar Elharrouss, post-doctoral researchers; and Najmath Ottakath, master’s student.
The safety and security of players, spectators and everyone associated with the FIFA World Cup Qatar 2022 is a central focus of the Organizing Committee. Security risks are multiplied by the scale of the event and the large number of fans expected (over 1.5 million). Securing the FIFA World Cup Qatar 2022 is therefore a challenge, given the growing number of possible threats and the widespread use of technology.
Crowd management at World Cup stadiums and their perimeters is crucial to ensuring the safety and smooth flow of World Cup events due to the inherent crowd occlusion and density inside and outside the stadiums. Qatar 2022 will rely on the deployment of advanced technologies, such as surveillance drones, ICT and AI, to optimize crowd management.
In this regard, the QU research team first developed a crowd counting system based on drone data, which exploits expanded and scaled neural networks to extract relevant features and estimate crowd density.
In addition, a new dataset for crowd counting in sports facilities, named the Football Supporters Crowd Dataset (FSC-Set), was introduced. It includes 6,000 hand-annotated images depicting different types of scenes containing thousands of people gathered in or around stadiums.
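To illustrate the idea behind density-based crowd counting, the sketch below builds a density map from annotated head positions: each person contributes a normalized Gaussian, so integrating the map yields the crowd count. This is a minimal, hypothetical illustration in NumPy; the QU system learns such density maps from drone imagery with a multi-scale neural network, and the function name and parameters here are illustrative assumptions, not the team's actual code.

```python
import numpy as np

def density_map(points, shape, sigma=4.0):
    """Build a crowd density map from annotated head positions (a sketch).

    Each annotation contributes a 2-D Gaussian whose mass sums to 1,
    so integrating the whole map yields the crowd count. A counting CNN
    is trained to regress maps like this directly from images.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=np.float64)
    for (px, py) in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # normalize so each person contributes exactly 1
    return dmap

# Example: three annotated heads -> estimated count of 3
dmap = density_map([(10, 12), (40, 30), (55, 50)], shape=(64, 64))
print(round(dmap.sum()))  # -> 3
```

The advantage of regressing a density map rather than detecting individuals is robustness to the heavy occlusion typical of stadium crowds.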
The research team’s effort also focused on developing a facial recognition system that handles faces under pose variations using a multitask convolutional neural network (CNN). More precisely, a cascading structure combines a pose estimation approach with a face identification module. The CNN-based pose estimator was trained on three categories of face images: left-side, frontal, and right-side captures.
Then, three CNN architectures, namely left VGG-16+PReLU, frontal VGG-16+PReLU, and right VGG-16+PReLU, were deployed to identify faces based on the estimated pose. Additionally, a skin-based facial segmentation scheme based on structure-texture decomposition and color-invariant description was introduced to remove unnecessary facial information (e.g., background content). Empirical evaluations were conducted on four popular facial recognition datasets, where the proposed system outperformed the associated state-of-the-art schemes.
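The cascade structure described above can be sketched as a two-stage dispatch: estimate the pose, then route the face to the identification model trained for that pose. The snippet below is a toy illustration of that control flow only; the pose estimator and the per-pose identifiers are hypothetical stand-ins (in the QU system these are a trained CNN pose estimator and three VGG-16+PReLU networks).

```python
import numpy as np

def estimate_pose(face):
    """Toy pose estimator standing in for the trained CNN: classifies a
    face crop by horizontal brightness asymmetry (illustrative only)."""
    w = face.shape[1]
    left, right = face[:, : w // 2].mean(), face[:, w // 2 :].mean()
    if left > right * 1.2:
        return "left"
    if right > left * 1.2:
        return "right"
    return "frontal"

# One (mock) identification model per pose category; in the real system
# each entry would be a pose-specific VGG-16+PReLU network.
identifiers = {
    "left": lambda f: "id-from-left-model",
    "frontal": lambda f: "id-from-frontal-model",
    "right": lambda f: "id-from-right-model",
}

def recognize(face):
    """Cascade: estimate the pose first, then dispatch the face to the
    identification model trained for that pose."""
    pose = estimate_pose(face)
    return pose, identifiers[pose](face)

face = np.ones((8, 8))   # uniform crop -> routed to the frontal branch
print(recognize(face))   # -> ('frontal', 'id-from-frontal-model')
```

Routing each face to a pose-specialized model lets every identifier learn a narrower, easier distribution than a single model covering all poses.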
Recently, abnormal event detection (AED) from drone video surveillance has been gaining increasing attention due to its reliability and cost-effectiveness. Typically, drones equipped with cameras can detect violent behavior in crowds at sporting events, and they can monitor crowds around stadiums and other public places during the World Cup.
To this end, the research team, led by Professor Al Maadeed, has developed a new AED system that learns abnormal actions from both normal and abnormal segments. It avoids annotating individual anomalous events in the training video sequences, reducing the computational cost so that the system can be easily deployed on drones. Anomalous events are instead learned using a deep multiple-instance ranking framework that leverages weakly annotated training footage. Simply put, training annotations are placed on entire videos instead of specific clips.
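The weak-supervision idea can be made concrete with the standard multiple-instance ranking loss: each video is a "bag" of segment scores, only the video-level label (normal or abnormal) is known, and the loss pushes the highest-scoring segment of an abnormal video above the highest-scoring segment of a normal video. The function below is a minimal NumPy sketch of that loss under these assumptions, not the team's published formulation.

```python
import numpy as np

def mil_ranking_loss(scores_abnormal, scores_normal, margin=1.0):
    """Hinge-style MIL ranking loss (a sketch).

    `scores_abnormal` / `scores_normal` are per-segment anomaly scores
    for one abnormal-labeled and one normal-labeled video. Only the
    video-level labels are used: the most anomalous segment of the
    abnormal video should outscore every segment of the normal video
    by `margin`.
    """
    top_abnormal = np.max(scores_abnormal)  # most suspicious segment
    top_normal = np.max(scores_normal)
    return max(0.0, margin - top_abnormal + top_normal)

# Well-separated videos incur zero loss; overlapping scores are penalized.
print(mil_ranking_loss([0.2, 1.0], [0.0, 0.0]))  # -> 0.0
print(mil_ranking_loss([0.4, 0.5], [0.3, 0.6]))  # positive: ranking violated
```

Because the loss only compares the top-scoring segments, no clip-level annotation is ever needed, which is exactly what makes weak video-level labels sufficient for training.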