Invited Speakers
Abstract
Human vision can execute multiple vision tasks effortlessly and with great efficiency. The objective of computer vision is to enable computers to perceive and comprehend the world around them, conceivably better than humans do. Thanks to recent advances in AI and machine learning, this objective is fast becoming a reality.
In this talk, we will give a brief overview of classical techniques in computer vision along with modern trends and challenges in this field. The talk will cover the classical computer vision pipeline, which consists of feature extraction followed by conventional machine learning, as well as deep learning approaches. We will also cover challenging applications of computer vision, including aerial vision and agri-vision. This talk will be beneficial to graduate students and young researchers with an interest in machine learning and computer vision.
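To make the classical pipeline concrete, here is a minimal sketch of feature extraction followed by conventional machine learning, assuming the scikit-image and scikit-learn toolchain and a small digits dataset; the HOG parameters and the SVM choice are illustrative assumptions rather than the specific techniques covered in the talk.

```python
# Minimal sketch of the classical pipeline: hand-crafted features + conventional ML.
# Dataset, HOG parameters and the SVM choice are illustrative assumptions.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from skimage.feature import hog

digits = datasets.load_digits()          # small 8x8 grayscale images
features = [hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
            for img in digits.images]    # step 1: feature extraction

X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.3, random_state=0)

clf = svm.SVC(kernel="rbf")              # step 2: conventional machine learning
clf.fit(X_train, y_train)
print("classical pipeline accuracy:", clf.score(X_test, y_test))
```

A deep learning approach would instead learn the features and the classifier jointly from the raw pixels, which is the contrast the talk draws between the two pipelines.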
Bio
Dr. Ahmar Rashid has about nineteen years of industrial as well as research and development experience. His core research work includes the performance analysis of optimization algorithms for static as well as dynamic image reconstruction using electrical impedance tomography (EIT), with applications in industrial process monitoring and medical imaging.
His research interests include machine learning, robotic vision, the application and analysis of evolutionary algorithms to solve static as well as dynamic optimization problems, and the design and implementation of novel algorithms and techniques in the field of genomics. As the in-charge of the Aerial Robotics and Vision Research Lab, he is working towards the development of novel algorithms to induce intelligence in teams of autonomous flying robots, with applications in surveillance and monitoring, precision agriculture, aerial mapping, target tracking and analysis, etc. In collaboration with other colleagues, much of his research work has been published in international journals of high repute.
Abstract
Reconfigurable Intelligent Surfaces (RIS) have recently been used in wireless networks to create dynamic radio environments and control signal propagation. In this talk, we will show how an RIS panel can be used to improve bi-directional communications. Further, the RIS panel is equipped with a solar panel that harvests energy to power the RIS panel's smart controller and reflecting elements, hence reducing the need for external power and introducing flexibility in locating the RIS panel. A novel framework will be introduced to optimally decide the transmit power of each user and the number of elements used to reflect the signal of any communicating pair in the system (user-user or base station-user). An optimization problem is formulated to jointly minimize a scalarized function of the energy of the communicating pair and the RIS panel and to find the optimal number of reflecting elements used by each user. In addition, a more efficient, close-to-optimal solution is found using Benders decomposition. Simulation results show that the proposed model is capable of delivering the minimum rate of each user even if line-of-sight communication is not achievable.
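As a rough illustration of the kind of joint decision described above (not the speaker's actual formulation), the toy sketch below brute-forces the transmit power and the number of reflecting elements that minimize a weighted, scalarized sum of user and RIS energy subject to a minimum-rate constraint; the simplified SNR model and every constant are assumptions.

```python
import numpy as np

# Toy scalarized energy minimization for one communicating pair (illustrative only):
# choose transmit power p and number of RIS elements n to minimize
#   W1 * p + W2 * n * P_ELEM   subject to   rate(p, n) >= R_MIN.
# The SNR model (beamforming gain growing with n^2) and all constants are assumptions.
BANDWIDTH = 1.0          # normalized bandwidth
NOISE = 1e-9             # noise power (W)
CHANNEL_GAIN = 1e-8      # cascaded path gain per element (assumed)
P_ELEM = 0.01            # per-element power of the RIS controller (assumed, W)
W1, W2 = 1.0, 1.0        # scalarization weights
R_MIN = 2.0              # minimum rate (bits/s/Hz)

def rate(p, n):
    snr = p * CHANNEL_GAIN * n**2 / NOISE   # idealized RIS beamforming gain ~ n^2
    return BANDWIDTH * np.log2(1.0 + snr)

best = None
for n in range(1, 201):                      # candidate numbers of reflecting elements
    for p in np.linspace(0.01, 1.0, 100):    # candidate transmit powers (W)
        if rate(p, n) >= R_MIN:
            cost = W1 * p + W2 * n * P_ELEM  # scalarized energy objective
            if best is None or cost < best[0]:
                best = (cost, p, n)

print("cost=%.4f  power=%.3f W  elements=%d" % best)
```

The exhaustive search stands in for the optimal solver; the Benders-decomposition approach mentioned in the abstract would solve the same kind of mixed decision far more efficiently.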
Bio
Ahmed E. Kamal is a professor and Director of Graduate Education in the Department of Electrical and Computer Engineering at Iowa State University in the USA. He received a B.Sc. (distinction with honors) and an M.Sc., both from Cairo University, Egypt, and an M.A.Sc. and a Ph.D., both from the University of Toronto, Canada, all in Electrical Engineering. He is a Fellow of the IEEE and a senior member of the Association for Computing Machinery. He was an IEEE Communications Society Distinguished Lecturer for 2013 and 2014. Kamal's research interests include cognitive radio networks, optical networks, wireless sensor networks, and performance evaluation. He received the 1993 IEE Hartree Premium for papers published in Computers and Control in IEE Proceedings, and the best paper award of the Ad Hoc and Sensor Networks Symposium at IEEE Globecom 2008. He also received the 2016 Outstanding Technical Achievement Award from the Optical Networks Technical Committee of the IEEE Communications Society. Kamal chaired or co-chaired Technical Program Committees of several IEEE-sponsored conferences, including the Optical Networks and Systems Symposia of IEEE Globecom 2007 and 2010, the Cognitive Radio and Networks Symposia of IEEE Globecom 2012 and 2014, and the Access Systems and Networks track of the IEEE International Conference on Communications 2016. He was also the chair of the IEEE Communications Society Technical Committee on Transmission, Access and Optical Systems (TAOS) for 2015 and 2016. He is on the editorial boards of IEEE Communications Surveys and Tutorials, the Computer Networks journal, and the Optical Switching and Networking journal.
Abstract
Software Defined Networking (SDN) technology has great potential to mitigate security challenges in Internet of Things (IoT) networks. In this talk, a framework to control intrusions inside an IoT network after detecting them with an Intrusion Detection System (IDS) will be discussed. The IDS detects intrusions by examining host logs and network traffic. The intrusion detection approaches for this framework can be signature-based, anomaly-based, or machine-learning-based. The framework takes advantage of the SDN controller to enforce new security policies and to reconfigure firewalls and other devices in IoT networks in order to mitigate intrusions and other security threats.
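Purely as a hedged illustration of how detection could feed mitigation in such a framework, the sketch below flags an anomalous flow and builds a drop rule for it; the thresholds, flow fields and the install_rule stub are hypothetical and do not correspond to any real controller API.

```python
# Hypothetical sketch: an anomaly-based IDS decision feeding an SDN-style drop rule.
# Thresholds, flow fields and install_rule() are illustrative assumptions,
# not the API of any real controller or firewall.
from statistics import mean, pstdev

def is_anomalous(history, current, k=3.0):
    """Flag a flow whose packet rate deviates k standard deviations from its history."""
    mu, sigma = mean(history), pstdev(history)
    return sigma > 0 and abs(current - mu) > k * sigma

def build_drop_rule(flow):
    """Build a firewall/flow-table entry blocking the offending source (hypothetical format)."""
    return {"match": {"src_ip": flow["src_ip"], "dst_port": flow["dst_port"]},
            "action": "drop", "priority": 100, "idle_timeout": 300}

def install_rule(rule):
    # Stand-in for a call to the SDN controller's northbound API.
    print("installing rule:", rule)

history = [120, 130, 110, 125, 118]                       # packets/s seen previously
flow = {"src_ip": "10.0.0.7", "dst_port": 1883, "pps": 950}
if is_anomalous(history, flow["pps"]):
    install_rule(build_drop_rule(flow))
```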
Bio
Dr Amir Qayyum is working as a Professor and Dean, External Linkages and International Collaboration, at Capital University of Science and Technology (CUST), Islamabad. He received his PhD from the University of Paris-Sud, France. He is actively involved with the Internet Engineering Task Force (IETF) and the Internet Corporation for Assigned Names and Numbers (ICANN), serving on the ALAC and RSSAC advisory committees. He has also served as Chair of the IEEE Islamabad Section and on the Board of Directors of the Internet Society (ISOC) Islamabad Chapter. He is the founding director of the Center of Research in Networks and Telecom (CoReNeT). He has numerous publications in international conferences and journals. His research interests include wireless networks, software-defined networking, vehicular and mobile ad hoc networks, and sensor networks for health care. He has led several national and international funded research projects and has worked as the local coordinator of many Erasmus Mundus and Erasmus Plus projects. He is also a founding member of the Board of Directors of the Pak France Alumni Network (PFAN). In recognition of his services for research and cultural collaborations with France, he was awarded the medal of "Chevalier dans l'Ordre des Palmes Académiques" by the French government.
Abstract
This talk will provide insights into the evolution of distributed AI over the last decade and how current technological advancements are leading us to a future where Edge AI will become the mainstream AI paradigm. Example projects and applications will highlight advancements in systems, data models and algorithms that have contributed to four distinct waves of AI research and innovation during this evolution. The talk will conclude with the challenges and opportunities that are driving the evolution towards Edge AI.
Bio
Ashiq Anjum is a Professor of Distributed Systems at the University of Leicester, UK. His areas of research include data-intensive distributed systems, distributed machine learning models and high performance analytics platforms for continuous processing of streaming data. Prof. Anjum has participated in a number of large projects, including EU-funded projects on healthcare and medical data analytics, distributed clinical intelligence and integration, and iterative genome analytics. He has been investigating large scale distributed systems and analytics platforms for LHC data in collaboration with CERN, Geneva, Switzerland, for the last fifteen years. He has also been working with aerospace, rail and automobile companies to investigate how infrastructures and services can benefit from real time analytics by intelligently analyzing IoT data streams for accuracy, reliability, safety and capacity. He is working closely with a leading VR provider to commoditise AI-driven digital twins and enable real time visualization of data models and distributed algorithms in a Virtual Reality environment.
Abstract
Energy consumption in buildings is affected by various factors, including a building's physical characteristics, the appliances inside, and the outdoor environment. However, the occupants' behaviour, which ultimately determines the global energy consumption, must not be forgotten. In most previous works and simulation tools, human behaviour is modelled as occupancy profiles. Future research should focus on detailed occupant behaviour representation, particularly the cognitive, reactive, and deliberative mechanisms. In this talk, I shall present how occupants' dynamic behaviour is modelled and co-simulated together with the physical aspects of a building and an energy management system. An approach based on multi-agent systems is developed in which the physical characteristics of the building, the reactive behaviour that is sensitive to physical data, the deliberative behaviour of the occupants, and the building energy management system are co-simulated, along with a methodology for parameter tuning in the proposed behaviour model. This work opens new perspectives not only in building simulation and in the validation of energy management systems but also in the representation of buildings in the smart grid, where signals can be sent to end users advising them to modulate their consumption.
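As a toy illustration of coupling a reactive occupant agent with a simple building model in one co-simulation loop (the thermal constants, time step and comfort rule are invented for illustration and are not the model presented in the talk):

```python
# Toy co-simulation of a one-zone thermal model and a reactive occupant agent.
# Time step, thermal constants and the comfort rule are illustrative assumptions.
def thermal_step(t_in, t_out, heater_on, dt=0.25, loss=0.1, gain=2.0):
    """Very simple building physics: heat loss to outdoors plus heater gain."""
    return t_in + dt * (loss * (t_out - t_in) + (gain if heater_on else 0.0))

class ReactiveOccupant:
    """Reacts to the sensed temperature; a deliberative layer could override the setpoint."""
    def __init__(self, setpoint=21.0, deadband=1.0):
        self.setpoint, self.deadband = setpoint, deadband
    def act(self, t_in):
        if t_in < self.setpoint - self.deadband:
            return True        # switch heater on
        if t_in > self.setpoint + self.deadband:
            return False       # switch heater off
        return None            # no action; keep previous state

occupant, t_in, t_out, heater = ReactiveOccupant(), 17.0, 5.0, False
for hour in range(24):
    decision = occupant.act(t_in)              # agent observes the building state
    heater = heater if decision is None else decision
    t_in = thermal_step(t_in, t_out, heater)   # building model advances one step
    print(f"hour {hour:2d}: T_in={t_in:5.2f} C  heater={'on' if heater else 'off'}")
```

In a full multi-agent co-simulation, the energy management system and a deliberative occupant model would be further agents exchanging signals in the same loop.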
Bio
Dr. Ayesha Kashif has been an Assistant Professor of Computer Science at the Riphah Institute of Computing and Applied Sciences, Lahore Campus, Riphah International University, Pakistan, since 2018, with research interests in modelling and simulating inhabitants' behaviour for energy management using a multi-agent approach, multi-model co-simulations, serious games, data mining and machine learning. She is an enthusiastic applied industrial researcher on energy management with 7 years of experience working on industrial projects funded by Électricité de France (EDF). She has 3 journal publications, 1 book chapter and 9 international conference papers. Dr. Ayesha completed her MPhil at the Grenoble Institute of Technology, France, in 2010 and her PhD in Computer Science at the University of Grenoble, France, in 2014, funded by EDF through the "SUPERBAT" project. She has 4+ years of professional software engineering experience in Pakistan. She also completed postdoctoral research on the "MAEVIA" and "SMART ENERGY" projects, funded by EDF, from 2014 to 2016.
Abstract
Data science is known as the science of data, where analytics can be performed to produce models, but these models may miss logical reasoning. Such logical reasoning is needed to interpret results and correlate them with real life. Understanding big data from a process perspective gives us the possibility to explore the connections between events. Process mining extracts process models from event logs that explain what is happening in an organization. A dataset representing processes, patterns, decisions and end-to-end processes helps produce explainable machine learning models. Black-box machine learning models can lead to biased AI decision-making. AI supports critical decision-making not only for business but also for health care and society. The most significant difference between AI and other decision technologies is that it 'learns.' How to make this learning process unbiased and explainable for autonomous decisions is one of the biggest questions of today's world.
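To ground the idea of extracting process structure from event logs, the small sketch below derives a directly-follows graph, one of the simplest building blocks of process discovery, from an invented log; production toolkits such as pm4py implement far richer discovery algorithms.

```python
# Minimal process-discovery sketch: directly-follows relations from an event log.
# The event log is invented; records follow the usual case-id / activity convention.
from collections import Counter

event_log = [                      # (case_id, activity), already ordered by timestamp
    ("c1", "register"), ("c1", "check"), ("c1", "approve"), ("c1", "pay"),
    ("c2", "register"), ("c2", "check"), ("c2", "reject"),
    ("c3", "register"), ("c3", "check"), ("c3", "approve"), ("c3", "pay"),
]

traces = {}
for case, activity in event_log:   # group events into per-case traces
    traces.setdefault(case, []).append(activity)

dfg = Counter()                    # how often activity a is directly followed by b
for trace in traces.values():
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), count in sorted(dfg.items()):
    print(f"{a} -> {b}: {count}")
```

The resulting graph is itself interpretable, which is the link the abstract draws between process mining and explainable decision-making.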
Bio
Dr. Faiza is an Assistant Professor in the Data Science research group at the University of Twente, The Netherlands. She has been involved in various research projects in the information management domain, namely privacy and ML, viable policy development, Learning from Incidents, and Extended Single Window. Moreover, Faiza has worked on numerous ML and process mining-related projects for industry partners, including KPN, Vodafone, and Philips. Her research revolves around using intelligent event-driven techniques such as process mining and machine learning to improve the efficiency and effectiveness of information compliance, while keeping in view normative auditing and privacy concerns. Previously, she was a postdoctoral researcher and lecturer in the Services, Cybersecurity and Safety research group at the University of Twente. She also served as a lecturer at Saxion University of Applied Sciences, The Netherlands. Faiza received her Ph.D. degree from Tilburg University, where she worked with the Information Systems research group. Her Ph.D. work mainly focused on utilizing ontological norms and process mining algorithms for compliance checking purposes.
Abstract
Glacial Lake Outburst Floods (GLOFs) are a major hazard in the high-altitude glaciated regions of northern Pakistan. Depending on the volume and size of the lake, temperature and precipitation, and the geomorphological parameters of the terrain, mechanical failures may cause a breach in the wall of an ice- or moraine-dammed glacial lake. Consequently, a sudden discharge of millions of cubic meters of meltwater and debris can occur in a short time interval, with catastrophic impact on the socioeconomic life of the downstream communities. This talk will be a conversation around how technological interventions can enhance the capacity to detect and monitor glacial lakes in Pakistan and improve our understanding of glacial lake failures.
Bio
Khurram Bhatti is an Associate Professor and Director of Research at the Information Technology University (ITU) Lahore, Pakistan. He is a Marie-Curie Research Fellow (MCF) with a PhD in Computer Engineering and MS in Embedded Systems from the University of Nice-Sophia Antipolis, France. His research interests include technological interventions for climate monitoring, Artificial Intelligence (AI) & Machine Learning, Data Analytics, Information Security and Embedded Systems. He has over 10 years of research experience and 40+ peer-reviewed research articles in international conferences and journals. He is the PI/CoPI of several international research projects. Currently, he has the status of 2021 National Geographic Explorer due to his work on the use of AI for Glacial Lake Outburst Floods (GLOFs) in Pakistan under the "AI for Earth" joint initiative of Microsoft and National Geographic.
Abstract
Machine learning (ML) and AI will play a key role in the development of 6G networks. Network virtualization and network softwarization solutions in 5G networks can support data-driven, intelligent and automated networks to some extent, and this trend will grow in 5G-advanced networks. The era of network virtualization in 5G leads to the era of the smart and intelligent networks of the future. Radio access network algorithms and radio resource management functions can exploit network intelligence to fine-tune network parameters and reach close-to-optimal performance in 5G networks. In 6G networks, network intelligence is envisioned to be end-to-end, and the air interface is envisioned to be AI-native. User equipment (UE) devices need to be smarter, environment and context aware, and capable of running ML algorithms. This talk will focus on the main practical challenges in developing machine learning solutions for 5G use cases and will emphasize, with a case study, how deployment of these solutions is much harder in a live network than in theoretical performance evaluation. Further, a vision for the paradigm shift from AI-as-an-enabler to an AI-native air interface in 6G networks will be provided.
Bio
M. Majid Butt is a Senior Research Specialist at Nokia Bell Labs, France, and an Adjunct Research Professor at Trinity College Dublin, Ireland. Prior to that, he held various positions at the University of Glasgow, UK, Trinity College Dublin, Ireland, and Fraunhofer HHI, Germany. His current research interests include communication techniques for wireless networks, with a focus on radio resource allocation, scheduling algorithms, energy efficiency, and machine learning for RAN. He has authored more than 70 peer-reviewed conference and journal articles and 4 book chapters, and has filed over 25 patents in these areas. He frequently gives invited and technical tutorial talks on various topics at IEEE conferences, including ICC, Globecom, and VTC. Dr. Butt is a recipient of the Marie Curie Alain Bensoussan Post-Doctoral Fellowship from the European Research Consortium for Informatics and Mathematics. He has been an Associate Editor for IEEE Communications Magazine, IEEE Open Journal of the Communications Society and IEEE Open Journal of Vehicular Technology.
Abstract
In our daily life, we use various wireless communication technologies, ranging from WiFi and Bluetooth to mobile phones and television. All of these technologies use the wireless radio spectrum. This spectrum is a scarce natural resource, and due to the fixed spectrum assignment policy it is underutilized, creating a spectrum scarcity problem. In this talk, we discuss the need for cognitive radio networks (CRNs) and the dynamic spectrum management paradigm. CRNs are proposed and designed to meet the future wireless radio spectrum needs of communication networks. With CRNs, the underutilized wireless radio spectrum can be efficiently utilized using spectrum-related functionalities. We then discuss how this wireless radio spectrum can be efficiently managed through blockchain.
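A tamper-evident, append-only ledger of spectrum leases is the core idea behind blockchain-managed spectrum sharing; the minimal sketch below (standard library only, with invented lease records) chains lease entries by hash so that tampering is detectable, and deliberately omits consensus and networking.

```python
# Toy append-only ledger for spectrum leases (illustrative; no consensus or networking).
import hashlib, json, time

def new_block(lease, prev_hash):
    """Hash the lease record together with the previous block's hash."""
    block = {"lease": lease, "prev_hash": prev_hash, "timestamp": time.time()}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [new_block({"genesis": True}, "0" * 64)]
leases = [                                   # invented secondary-user lease records
    {"user": "SU-17", "band_MHz": [614, 620], "duration_s": 3600},
    {"user": "SU-23", "band_MHz": [622, 628], "duration_s": 1800},
]
for lease in leases:
    chain.append(new_block(lease, chain[-1]["hash"]))

# Verification: every block must reference the hash of its predecessor.
ok = all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))
print("ledger consistent:", ok)
```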
Bio
Mubashir Husain Rehmani (M'14-SM'15) received the B.Eng. degree in computer systems engineering from Mehran University of Engineering and Technology, Jamshoro, Pakistan, in 2004, the M.S. degree from the University of Paris XI, Paris, France, in 2008, and the Ph.D. degree from the University Pierre and Marie Curie, Paris, in 2011. He is currently working as an Assistant Lecturer in the Department of Computer Science, Munster Technological University (MTU), Ireland. He has authored or edited a total of eight books: two with Springer, two with IGI Global, USA, three with CRC Press – Taylor and Francis Group, UK, and one with Wiley, UK. He was ranked #1 in all engineering disciplines in 2017 by the Pakistan Council for Science and Technology (PCST), Ministry of Science and Technology, Government of Pakistan. His cumulative impact factor is 597, with an h-index of 38, an i10-index of 84, and 6043 citations. He has received several best paper awards. He serves on the editorial boards of several top-ranked journals, including NATURE Scientific Reports, IEEE Communications Surveys and Tutorials, IEEE Transactions on Green Communications and Networking, and many others. He has published over 125 peer-reviewed publications in high-impact journals, transactions, and magazines. He has been selected for inclusion on the annual Highly Cited Researchers™ 2020 and 2021 lists from Clarivate, featuring in the top 1% in the field of Computer Science and Cross-Field.
Abstract
Industry 4.0 has revolutionized digitalization across the globe, and Pakistan stands to gain significant opportunities on the path towards a digital economy. The circular debt in the power sector has risen to 2.5 trillion PKR, and the power sector is adopting smart grid technologies ahead of the deregulation of the power market through the CTBCM (competitive trading bilateral contract market), so that the associated challenges may be proactively addressed through indigenous smart grid solutions. I shall present the implementation status of the advanced metering infrastructure (AMI) landscape in Pakistan, which is in its initial stages, and the efforts being made to facilitate businesses and investment opportunities where production prices can be competitive in the global market. I shall also present the smart grid integration testing lab established at WAPDA House, Lahore, which aims to ensure seamless interoperability of smart devices and standardization before their integration into the national grid. Moreover, the success of the CTBCM model depends heavily on accurate short-, medium- and long-term electricity demand forecasting, without which the projected benefits will become a challenge. Besides this, an AMI landscape with real-time information has huge potential for developing AI- and ML-based applications, ranging from theft detection, energy auditing and asset performance monitoring to smart grid switching, in order to exploit the national grid to its full capabilities. I shall conclude with the technology roadmap for digitalization in Pakistan's power sector, with a special focus on IoT-driven smart solutions.
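To illustrate the kind of short-term demand forecasting that CTBCM settlement would rely on, here is a minimal autoregressive sketch; the synthetic load profile, the 24-hour lag window and the plain linear model are assumptions for illustration only.

```python
# Minimal short-term load forecasting sketch: regression on lagged hourly demand.
# The synthetic load profile and the choice of 24 lags are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                                   # 60 days of hourly data
load = 1000 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 30, hours.size)

LAGS = 24                                                    # predict from the last day
X = np.array([load[t - LAGS:t] for t in range(LAGS, len(load))])
y = load[LAGS:]

split = len(X) - 24                                          # hold out the final day
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"next-day hourly forecast MAPE: {mape:.2f}%")
```

Production AMI forecasting would fold in weather, calendar and tariff features and would be evaluated separately for short-, medium- and long-term horizons.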
Bio
Dr. Muhammad Kashif Shahzad is Chief Technology Officer at the Power Information Technology Company (PITC), Ministry of Energy, Govt. of Pakistan. Dr. Shahzad completed his MPhil in industrial engineering at the Grenoble Institute of Technology with distinction in 2008 and his PhD at the University of Grenoble, France, in 2012. He completed a BSc in Mechanical Engineering at UET Lahore in 1999 and an MS in Total Quality Management with distinction at the University of the Punjab, Pakistan, in 2006. He also completed a Bachelor's in Computer Science at AIOU, Islamabad, with first position and a gold medal in 2000. Dr. Shahzad has 23+ years of national and international experience in designing, developing and deploying IT-enabled, applied R&D-driven technology solutions, with research interests in data model interoperability, Artificial Intelligence and Machine Learning. Dr. Shahzad has published 7 journal papers, 1 book chapter and 23 international conference papers. The key projects completed include the establishment of an industrial engineering chair at INSA de Lyon, France, with 1 million euros of funding for 5 years to combine spare parts demand forecasting with supply planning decisions; the large-scale EU projects INTEGRATE (€27.7 million, SP1-2012-8, 27 partners), IMPROVE (€35 million, SP8, 32 partners) and IMPLEMENT (€34 million, SP7, 36 partners); the USAID Sustainable Energy for Pakistan (SEP) project (2018-2021) [smart grid integration testing lab, universal data integration layer (UDIL) design specifications, UDIL testing suite and transformer monitoring system (TMS) pilot for PESCO and MEPCO]; and the USEA Business Innovation Partnership (BIP) project (2021-2023) [data center infrastructure planning and cyber security pilot for the CTBCM model for the energy sector].
Abstract
Electromagnetic (EM) waves can be manipulated using metamaterials and metasurfaces to realize flexible control over their propagation, radiation, or scattering. At present, there are several powerful, commercially available electromagnetic simulation tools that can model the behavior of EM waves in physical systems. However, these commercial EM tools are limited in their applications by their computational efficiency and memory requirements for electrically large problems. In the first part of this talk, we will review a rather new theoretical approach in EM, termed “fractional electromagnetics”, which has attracted widespread attention in recent years, motivated by its fundamental importance and the possibility of numerous practical applications in the electromagnetic modeling of anisotropic, inhomogeneous, disordered and complex systems. The effectiveness of this fractional approach has been demonstrated by the good agreement of calculated results with full-wave simulations and/or experiments. In the second part of this talk, we will discuss some recent advancements and associated challenges in the design, modeling and physical realization of static and tunable/reconfigurable electromagnetic metasurfaces for future communication technologies.
Bio
Dr. Muhammad Zubair is currently an Associate Professor and Chair of the Department of Electrical Engineering at Information Technology University (ITU), Lahore. He has been a Visiting Assistant Professor at the Singapore University of Technology and Design (SUTD). Before joining ITU, he was a Postdoctoral Research Fellow at the SUTD-MIT International Design Centre, Singapore. He received his PhD in Computational Electromagnetics (CEM) from the Polytechnic University of Turin, Italy. He is the principal author of the pioneering book Electromagnetic Fields and Waves in Fractional Dimensional Space, published by Springer, NY. He has contributed over 70 scientific works to journals and conferences of international repute. He has been an Associate Editor of IEEE Access and an Editorial Board Member for IET Microwaves, Antennas & Propagation, PLOS One and the International Journal of Antennas and Propagation. He was selected for the URSI Young Scientist Award (YSA) 2021 and has been awarded the Punjab Innovation Research Challenge Award (PIRCA) 2021. Dr. Zubair is a Senior Member of the IEEE and serves as Secretary of the IEEE AP/MTT/EMC Joint Local (Islamabad) Chapter.
Abstract
Internet of Things (IoT) applications today involve data capture from sensors and devices that are close to the phenomenon being measured, with such data subsequently being transmitted to a cloud data centre for storage, analysis and visualisation. Currently, devices used for data capture often differ from those that are used to subsequently carry out analysis on such data. The increasing availability of storage and processing devices closer to the data capture device, perhaps over a one-hop network connection or even directly connected to the IoT device itself, requires more efficient allocation of processing across such edge devices and data centres. Supporting machine learning directly on edge devices also enables distributed (federated) learning, allowing user devices to be used directly in the inference or learning process. Scalability in this context needs to consider cloud resources, data distribution and initial processing on edge resources closer to the user. This talk considers whether a data communications network can be enhanced using edge resources, and whether a combined use of edge, in-network (in-transit) and cloud data centre resources provides an efficient infrastructure for machine learning and AI. The following questions are addressed in this talk:
Q1: How do we partition machine learning algorithms across Edge-Network-Cloud resources, based on constraints such as privacy, capacity and resilience?
Q2: Can machine learning algorithms be adapted based on the characteristics of the devices on which they are hosted? What does this mean for stability/convergence vs. performance? (A sketch of the federated learning setting follows below.)
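The distributed (federated) learning setting referred to above can be sketched as a minimal federated-averaging loop; the synthetic per-device data, the linear model trained by local gradient descent, and the size-weighted aggregation are assumptions chosen for brevity, not the speaker's method.

```python
# Minimal federated averaging (FedAvg-style) sketch for a linear model.
# Synthetic per-device data, learning rate and round count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
TRUE_W = np.array([2.0, -3.0, 0.5])

def make_device(n):
    """Create one device's local dataset around the same underlying model."""
    X = rng.normal(size=(n, 3))
    y = X @ TRUE_W + rng.normal(0, 0.1, n)
    return X, y

devices = [make_device(n) for n in (50, 80, 120)]            # unequal local datasets

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few local gradient-descent steps on one device's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(3)
for _ in range(20):                                          # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in devices]
    sizes = np.array([len(y) for _, y in devices])
    w_global = np.average(local_ws, axis=0, weights=sizes)   # size-weighted average

print("estimated weights:", np.round(w_global, 3))
```

Raw data never leaves the devices in this loop, which is the privacy constraint behind Q1; how the local update should adapt to heterogeneous devices is exactly Q2.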
Bio
Omer F. Rana is Professor of Performance Engineering at Cardiff University, with research interests in high performance distributed computing, data analysis/mining and multi-agent systems. He is also the Dean of International for the Physical Sciences and Engineering College at Cardiff University, responsible for establishing and supporting collaborative links between Cardiff University and other international institutions. He was formerly the deputy director of the Welsh eScience Centre and had the opportunity to interact with a number of computational scientists across Cardiff University and the UK. He is a fellow of Cardiff University's multi-disciplinary "Data Innovation" Research Institute. Rana has contributed to specification and standardisation activities via the Open Grid Forum and worked as a software developer with London-based Marshall Bio-Technology Limited prior to joining Cardiff University, where he developed specialist software to support biotech instrumentation. He contributed to public understanding of science, via the Wellcome Trust funded "Science Line", in collaboration with BBC and Channel 4. Rana holds a PhD in "Neural Computing and Parallel Architectures" from Imperial College (London Univ.), an MSc in Microelectronics (Univ. of Southampton) and a BEng in Information Systems Eng. from Imperial College (London Univ.). He serves on the editorial boards (as Associate Editor) of IEEE Transactions on Parallel and Distributed Systems, (formerly) IEEE Transactions on Cloud Computing, IEEE Cloud Computing magazine and ACM Transactions on Internet Technology. He is a founding-member and associate editor of ACM Transactions on Autonomous & Adaptive Systems.
Abstract
Pandemic conditions are once again in great prominence with the recent situation caused by COVID-19. Some of these conditions present feverish states that can be detected by means of mass screening at places with a great influx of people. Several indirect methods are available to estimate human body core temperature, among them axillary and tympanic thermometers and infrared measurements of the forehead and the inner canthi of the eye taken from thermal images. A febrile state is considered to be a body core temperature higher than 37.5 °C. This value may differ according to the indirect method used, which can make it difficult to identify febrile cases close to the threshold value; to assist in this task, advanced Artificial Intelligence tools such as Machine Learning (ML) algorithms may be an important aid. The aim of this research is to evaluate which ML technique has the best performance with a given indirect method of assessing body temperature, considering the reference provided by another method. A total of 140 subjects were screened (mean age of 37±7.1 years, ranging from 21 to 63; 69 males and 71 females), of which 10 were considered febrile by the axillary thermometer assessment. All were screened with axillary and tympanic thermometers and with facial infrared imaging using a thermal camera. Five ML methods were selected for this research: Multilayer Perceptron (MLP), Support Vector Machines (SVM), Naïve Bayes (NB), k-Nearest Neighbor (kNN) and Random Forest (RF). A Python script was developed consisting of six tests: 5- and 10-fold cross-validation with 20%, 25% and 30% test samples. When a 20% test size was used, the ML model was trained with the remaining 80% of the sample. From the results of the defined tests for all ML techniques and temperature assessment methods, it can be observed that the worst results are given by MLP for all tests and methods. All the other ML techniques presented good results (accuracy > 85%), with SVM being the technique that presented the best results for the inner canthi of the eye and forehead measurements, closely followed by the NB algorithm. For tympanic measurements, the ML technique that performed best was NB, followed by RF. Different fever thresholds may have to be considered, which can make the task of assessors (humans or systems) difficult; to automate the process, ML techniques may be used to ease the burden. This research showed that ML algorithm performance may differ with the temperature assessment method. MLP-based systems should be avoided, since there is a single input value from which to find an output; techniques such as SVM, NB and RF presented better performance (accuracy > 89%) and are more reliable for this kind of implementation. The size of the knowledge base for training the ML models is also important, since results may differ between a 70% and an 80% training sample, although for the suggested methods this variation is very small.
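The evaluation protocol described above maps naturally onto a scikit-learn loop; the sketch below reproduces its structure on synthetic temperature readings (the real 140-subject dataset is not reproduced here), so the feature construction, the threshold and any resulting numbers are illustrative assumptions.

```python
# Sketch of the described protocol on synthetic data: five classifiers and
# 5- and 10-fold cross-validation. Synthetic readings and an arbitrary febrile
# threshold are used purely for illustration.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 140                                          # mirrors the 140 subjects
temp = rng.normal(36.8, 0.5, size=(n, 1))        # one indirect temperature reading each
febrile = (temp[:, 0] + rng.normal(0, 0.2, n) > 37.2).astype(int)   # synthetic labels

models = {
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "kNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
}

for folds in (5, 10):                            # the 5- and 10-fold settings
    cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=0)
    for name, model in models.items():
        scores = cross_val_score(model, temp, febrile, cv=cv)
        print(f"{folds}-fold {name}: mean accuracy {scores.mean():.2f}")
```

The study's 20%, 25% and 30% hold-out variants would be added with train_test_split on the same feature matrix before fitting each model.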
Bio
Ricardo Vardasca is currently a Coordinator Professor of Computer Science at ISLA Santarém, an Integrated Researcher at INEGI-LAETA, a Visiting Researcher at the University of South Wales (UK), a Visiting Professor at the Biomedical Engineering Department of the SRM Institute of Science and Technology in Chennai (India) and an external professor at the University of Valencia (Spain). At ISLA Santarém he is the director of the MSc in Web Services and Technologies Engineering degree, director of the Post-Graduation in Data Science and coordinator of the Projects Support Office. He was awarded a PhD in Computer Science and a BSc (Hons) in Information Technology by the University of South Wales (UK) and holds a BSc in Computer Science Engineering from the Polytechnic Institute of Leiria. He is a Fellow of the Royal Photographic Society (UK), where he was also recognized as an Accredited Senior Imaging Scientist. Since 2015 he has acted as General Secretary of the European Association of Thermology and, since the same date, he has been a member of the ISO committee ISO/TC121/SC3-IEC62D/JWG8, Project Team 9, on Human Screening Thermographs standards. Prof. Vardasca is a member of the editorial boards of the journals Thermology International, The Imaging Science Journal, the International Journal of E-Health and Medical Communications, and Educational Sciences. In the past he was a visiting researcher in Data Science at the Smart Infrastructure Facility at the University of Wollongong, Australia, with a Marie Curie grant.
Abstract
A conversation around the boom of devices connected to the Internet will take place in this talk. The journey of the Internet of Things (IoT): what may we see in the next decade, and what will be the main challenges and opportunities across the emerging field of IoT and the interrelated domains of big data and artificial intelligence? These will be some of the main ideas discussed during this talk.
Bio
Samee U. Khan received a PhD in 2007 from the University of Texas. Currently, he is the James W. Bagley Chair Professor and Head of the Department of Electrical & Computer Engineering at Mississippi State University (MSU). Before arriving at MSU, he was Cluster Lead (2016-2020) for Computer Systems Research at the National Science Foundation and the Walter B. Booth Professor at North Dakota State University. His research interests include optimization, robustness, and security of computer systems. His work has appeared in over 400 publications. He is an associate editor of the IEEE Transactions on Cloud Computing and the Journal of Parallel and Distributed Computing.
Abstract
Bio
Dr Shoab Khan received his PhD from the Georgia Institute of Technology, USA, in 1995. While in the US, he gained extensive experience working at several top-notch technology companies such as Scientific Atlanta, PictureTel and Cisco Systems. In 1999, Dr Shoab Khan co-founded an exciting startup named Communication Enabling Technology (CET). The startup raised US $17 million in venture funding in 2000. CET, with Dr Khan as chief architect, designed the world's highest-density media processor chip for VoIP media gateways. For his innovative technology work, Dr Khan has 5 US patents to his credit. Dr Khan has contributed 330+ international publications and a world-class textbook, Digital Design of Signal Processing Systems, published by John Wiley & Sons and followed in many universities across the globe. He is an Adjunct Professor of Computer and Software Engineering at the NUST College of EME. He is also a co-founder and Chancellor of CASE and CEO of CARE. CASE is a federally chartered, premier engineering institution, whereas CARE has risen to be one of the most profound high-technology engineering organizations in Pakistan, catering for the dire technical needs of defense and strategic organizations by executing cutting-edge technology. For his eminent industrial and academic profile, Dr Shoab has been awarded numerous honors and awards, including the Tamgha-e-Imtiaz of Pakistan, the NUST best teacher award, the HEC best researcher award and the NCR National Excellence Award in Engineering Education. He is currently serving as a member of the Prime Minister's Task Forces on Technology Driven Knowledge Economy, Science and Technology, and IT and Telecommunication, and as a member of the National Computing Education and Accreditation Council (NCEAC) under HEC, and he served as Chairman of the Pakistan Software Houses Association (P@SHA) for the year 2014-15.
Abstract
A wireless underwater communication system can be used by divers or scuba divers to communicate with each other freely. The design and considerations of a digital underwater voice communication modem will be given in this presentation. A prototype system for digital underwater acoustic voice communication will also be demonstrated during the talk.
The whole system consists of two main sections, a transmitter section and a receiver section, each made up of two main parts. At the transmitter, the input speech signal is compressed by the co-processor; after channel coding, the compressed bit stream is sent to the receiver on an acoustic carrier wave using an OFDM signal. At the receiver, demodulation is followed by speech decoding: after synchronization and equalization, the compressed speech is fed to the co-processor to synthesize the speech signal.
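A bare-bones OFDM modulator and demodulator over an ideal channel can be sketched as follows; the subcarrier count, cyclic-prefix length and QPSK mapping are illustrative choices, and the speech codec, channel coding, synchronization and equalization stages of the full modem are omitted.

```python
# Bare-bones OFDM over an ideal channel: QPSK mapping, IFFT, cyclic prefix, FFT, demap.
# Subcarrier count and CP length are illustrative; the codec, coding, synchronization
# and equalization stages of the full underwater modem are omitted.
import numpy as np

N_SC, CP = 64, 16                                  # subcarriers and cyclic-prefix length
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SC)           # 2 bits per QPSK symbol

# QPSK mapping: pairs of bits -> complex symbols on the unit circle
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx_time = np.fft.ifft(symbols)                     # one OFDM symbol in the time domain
tx_frame = np.concatenate([tx_time[-CP:], tx_time])  # prepend the cyclic prefix

rx_frame = tx_frame                                # ideal (noiseless) acoustic channel
rx_symbols = np.fft.fft(rx_frame[CP:])             # strip CP, back to frequency domain

rx_bits = np.empty_like(bits)
rx_bits[0::2] = (rx_symbols.real < 0).astype(int)  # demap QPSK back to bits
rx_bits[1::2] = (rx_symbols.imag < 0).astype(int)
print("bit errors:", np.count_nonzero(bits != rx_bits))
```

In a real underwater acoustic channel, multipath and Doppler make the synchronization and equalization stages mentioned above essential.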
Bio
Songzuo Liu received his B.S. and Ph.D. degrees in signal and information processing from the College of Underwater Acoustic Engineering, Harbin Engineering University (HEU), China, in 2008 and 2014, respectively. In 2016, he started as a postdoctoral researcher with the Underwater Wireless Sensor Networking (UWSN) group in the SENSE lab, Sapienza University of Rome. He is currently a professor in the College of Underwater Acoustic Engineering, HEU. His research interests lie in the areas of underwater acoustic communication and the design and implementation of underwater acoustic modems.
Abstract
Bio
Syed Ali Hassan (Senior Member, IEEE) received the M.S. degree in mathematics and the Ph.D. degree in electrical engineering from the Georgia Institute of Technology, Atlanta, USA, and the M.S. degree in electrical engineering from the University of Stuttgart, Germany. His broader area of research is signal processing for communications. He was a Research Associate with Cisco Systems, Inc., San Jose, CA, USA. He is currently an Associate Professor with the School of Electrical Engineering and Computer Science (SEECS), NUST, where he is also the Director of the Information Processing and Transmission Research Group, which focuses on various aspects of theoretical communications. He has (co)authored more than 250 publications in international conferences and journals and has organized several special issues/sessions as editor/chair in leading journals/conferences. He is also the CTO of Adept Tech Solutions, a US-based start-up with its R&D office in Pakistan, providing efficient solutions to engineering businesses.
Abstract
Machine learning (ML) and “Systems” have a symbiotic relationship. On one hand, the success of ML (including deep learning) is in great part due to “Systems”, which have made a large amount of computing power available for ML workloads. On the other hand, there is an emerging field of ML for Systems (or MLSys), where ML is proving to be an effective alternative for solving systems problems ranging from compilers to databases. This talk will examine how ML can tackle resource management challenges in large data centers. It will highlight two separate (but related) problems that often become pernicious in a large and complex infrastructure such as Amazon AWS, Microsoft Azure or Google Cloud: optimal scheduling of VMs/containers and performance anomaly detection in containers. The talk will showcase how ML can provide effective and adaptive solutions to both of these challenges through the use of deep reinforcement learning and unsupervised probabilistic models.
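For the anomaly-detection side, an unsupervised detector over container performance metrics can be sketched with an isolation forest; the metrics, synthetic data and contamination level are invented, and the talk's actual probabilistic models and deep-reinforcement-learning scheduler are not reproduced here.

```python
# Toy unsupervised anomaly detection over container metrics (CPU %, memory %, p99 latency ms).
# Synthetic data and the contamination setting are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(40, 5, 500),     # CPU utilisation (%)
                          rng.normal(55, 8, 500),     # memory utilisation (%)
                          rng.normal(120, 15, 500)])  # p99 latency (ms)
anomalies = np.array([[95.0, 90.0, 850.0],            # saturated container
                      [5.0, 96.0, 600.0]])            # memory-leak-like pattern
metrics = np.vstack([normal, anomalies])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = detector.predict(metrics)                    # +1 = normal, -1 = anomalous
print("flagged rows:", np.where(labels == -1)[0])
```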
Bio
Tania Lorido-Botran, PhD, currently works at Microsoft as part of the AI and Advanced Architectures team. Prior to that, she was a postdoctoral researcher at the Pacific Northwest National Laboratory. Dr. Lorido-Botran obtained her PhD in 2019 from the University of Deusto in Spain with a Cum Laude distinction. Her thesis work focused on the optimal resource management of cloud data centers through the application of optimization and ML techniques. During her PhD, she did a one-year internship at Rice University and two shorter ones at HP Labs and VMware. Her current research interests lie at the intersection of Machine Learning and Systems. Outside of work, she enjoys being outdoors, hiking and traveling.