Invited Speakers
Abstract
An ad hoc network is a collection of wireless nodes that self-configure (without the help of any infrastructure) to form a network topology. Any node can potentially communicate with other nodes within its transmission range. Due to the inherent broadcast nature of the wireless channel (at the physical layer), transmissions amongst nodes have to be coordinated so as to avoid excessive interference to any ongoing receptions, at least locally if not globally. Scheduling algorithms in ad hoc networks allow nodes to share the wireless channel so that concurrent transmissions can be decoded successfully. On one hand, scheduling needs to be efficient to maximize spatial reuse and minimize retransmissions due to collisions. On the other hand, the scheduling algorithm needs to be easily implementable in a distributed fashion with little, if any, coordination with other nodes in the network. In the absence of multi-user detection techniques in ad hoc networks, interference management is done through MAC scheduling by creating suitable exclusion zones around active receivers. Applying MAC mechanisms alters the spatial distribution of parent contending nodes to a thinned daughter distribution. Knowledge of the post-MAC geometrical distribution of nodes is important for efficient MAC implementation. The talk provides a generic model for the thinning process, based on guard zones around active receivers, to capture the MAC implementation in ad hoc networks. Applying this model with stochastic geometry techniques, it is shown that the thinning process results in different spatial distributions that are sensitive to the robustness of the physical layer, affecting both the performance and implementation of MAC protocols in ad hoc networks.
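To make the thinning process concrete, the following is a minimal simulation sketch; the density, area and fixed guard radius are illustrative assumptions, whereas in the talk's model the guard zones sit around active receivers and their size depends on physical-layer robustness:

```python
import numpy as np

rng = np.random.default_rng(0)

def guard_zone_thinning(area=100.0, density=0.05, guard_radius=2.0):
    """Thin a parent Poisson point process of contenders: a node is retained
    (allowed to transmit) only if no already-retained node lies within the
    guard radius, mimicking an exclusion zone enforced by the MAC."""
    n = rng.poisson(density * area * area)        # parent point count
    pts = rng.uniform(0.0, area, size=(n, 2))     # parent positions
    retained = []
    for i in rng.permutation(n):                  # random contention order
        p = pts[i]
        if all(np.linalg.norm(p - q) >= guard_radius for q in retained):
            retained.append(p)
    return pts, np.array(retained)

parent, daughter = guard_zone_thinning()
print(f"parent intensity  : {len(parent) / 100.0**2:.4f} nodes/unit^2")
print(f"daughter intensity: {len(daughter) / 100.0**2:.4f} nodes/unit^2")
```

With a fixed radius and a random contention order this reduces to a Matérn type-II style hard-core thinning; varying guard_radius mimics a more or less robust physical layer and visibly changes the daughter intensity.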
Bio
Dr. Hasan served in the engineering branch of the Pakistan Air Force (PAF) as an Aeronautical Engineer (specializing in Avionics). He graduated with a bachelor's degree from the College of Aeronautical Engineering (CAE) at PAF Academy Risalpur in May 1991 and an M.S. in Electrical and Computer Engineering from the University of Southern California in August 2002. He received his Ph.D. in wireless communications from the University of Texas at Austin under Professor Jeffrey G. Andrews. He has also completed a Master's in War Studies and Strategic Studies from PAF Air War College, Karachi. After a brief tenure as a lecturer at UT Austin, Dr. Hasan has been teaching graduate courses at Air University as visiting faculty. His exposure to academia and industry allows students to undertake fundamental research and explore issues that are also important to Pakistan's industry. He was awarded the Tamgha-i-Imtiaz (Military) for the design and implementation of PAF Next Generation Networks. Dr. Hasan played a key role in the establishment of the Aviation Design Institute (AvDI) at Pakistan Aeronautical Complex (PAC), Kamra, for the design and development of future aerial platforms. Dr. Hasan's research is mainly focused on interference suppression techniques in wireless ad hoc networks. His interest is in analyzing the impact of physical layer design, especially spread spectrum, on the capacity of ad hoc networks as well as its effect on MAC design. He has been using stochastic geometry tools to show that a spread spectrum physical layer has a number of advantages over narrowband systems in wireless communication networks. Dr. Hasan remained associated with Habib University since the initial design phase of the EE curriculum before finally joining the University on 1 January 2018 as an Associate Professor and Program Director (ECE).
Abstract
Regions around the world are facing rapid large-scale environmental changes brought about by climate change, demographic transitions, urbanization, and disruptive technologies. In South Asia, the impact of these changes is felt most in the water sector: in the poor management of irrigation networks, depletion of groundwater, deterioration in water quality, poor sanitation and difficulties in the preservation of ecosystems. Towards taming the hydrological complexity of river basins, the speaker's group has developed and deployed robotics and automation solutions for water management and precision agriculture in the world's largest contiguous irrigation network in Pakistan. These include real-time flow monitoring systems, innovative schemes for demand-based irrigation delivery and the use of unmanned aerial vehicles (UAVs) to inspect the siltation of water channels. Recognizing the strong coupling of human behavior with natural systems, the speaker's group has also developed game-theoretic socio-ecological models to investigate sustainability and environmentalism. In many instances, the effectiveness of these technologies has been demonstrated in scaling up solutions to ensure transparency and effective governance.
Bio
Dr. Abubakr Muhammad is an associate professor and chair of electrical engineering, the founding director of the Center for Water Informatics & Technology (WIT), and the lead for the NCRA National Agricultural Robotics Lab at LUMS. He received his Ph.D. in Electrical Engineering in 2005 from the Georgia Institute of Technology, USA, winning an institute-wide best Ph.D. dissertation award. He received master's degrees in mathematics and electrical engineering from Georgia Tech and was a postdoctoral researcher at the University of Pennsylvania and McGill University. Since 2008, his research group at LUMS has been doing applied research in robotics, automation and AI with applications to water, agriculture, and environmental issues. He serves on various advisory panels to government agencies and industry in Pakistan on water, climate and agricultural policy, especially on the use of emerging digital technologies for these sectors.
Abstract
Two of the envisioned characteristics of future 6G networks are battery-less operation and the use of artificial intelligence to achieve optimal operation. Based on this vision, in this work we consider communications in IoT systems that harvest energy from ambient sources and therefore do not require battery replenishment. Moreover, transmission decisions by IoT devices are generated by a reinforcement learning (RL) mechanism in order to maximize throughput. We present two RL approaches that mimic rational humans in the way they analyze the available information and make decisions. The proposed algorithms are called selector-actor-critic (SAC) and tuner-actor-critic (TAC). They are obtained by modifying the well-known actor-critic (AC) algorithm. SAC consists of an actor, a critic, and a selector. The role of the selector is to determine the most promising action at the current state based on the last estimate from the critic. TAC is model-based, and consists of a tuner, a model-learner, an actor, and a critic. After receiving the approximated value of the current state-action pair from the critic and the learned model from the model-learner, the tuner uses the Bellman equation to tune the value of the current state-action pair. This state-action pair is used by the actor to optimize the policy. The performance of the proposed algorithms is evaluated using numerical simulations and compared to that of the AC algorithm to show the advantages of the proposed algorithms.
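As an illustration of the selector idea, the following is a minimal tabular sketch on a toy two-state energy-harvesting link. It is one plausible reading of the description above, not the authors' exact algorithm: the critic maintains Q estimates, the selector determines the most promising action from the critic's latest estimate, and the actor's policy is nudged towards that choice. The dynamics, rewards and update constants are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an energy-harvesting IoT link (dynamics assumed):
# states: 0 = low stored energy, 1 = high; actions: 0 = idle, 1 = transmit.
N_S, N_A, GAMMA, ALPHA = 2, 2, 0.9, 0.1

def step(s, a):
    if a == 1:                                    # transmit: spend energy
        return 0, (1.0 if s == 1 else 0.1)        # pays off only when high
    return (1 if rng.random() < 0.8 else s), 0.0  # idle: harvest energy

Q = np.zeros((N_S, N_A))       # critic: state-action value estimates
pref = np.zeros((N_S, N_A))    # actor: action preferences

s = 0
for _ in range(3000):
    z = pref[s] - pref[s].max()                   # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()           # actor's stochastic policy
    a = rng.choice(N_A, p=probs)
    s2, r = step(s, a)
    td = r + GAMMA * Q[s2].max() - Q[s, a]        # critic: TD error
    Q[s, a] += ALPHA * td                         # critic update
    a_star = int(Q[s].argmax())                   # selector: most promising action
    pref[s, a_star] += ALPHA                      # actor nudged towards it
    s = s2

print("greedy policy per state:", Q.argmax(axis=1))  # typically [idle, transmit]
```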
Bio
Ahmed E. Kamal is a professor and Director of Graduate Education in the Department of Electrical and Computer Engineering at Iowa State University in the USA. He received a B.Sc. (distinction with honors) and an M.Sc., both from Cairo University, Egypt, and an M.A.Sc. and a Ph.D., both from the University of Toronto, Canada, all in Electrical Engineering. He is a Fellow of the IEEE and a senior member of the Association for Computing Machinery. He was an IEEE Communications Society Distinguished Lecturer for 2013 and 2014. Kamal's research interests include cognitive radio networks, optical networks, wireless sensor networks, and performance evaluation. He received the 1993 IEE Hartree Premium for papers published in Computers and Control in IEE Proceedings, and the best paper awards of the IEEE Globecom Ad Hoc and Sensor Networks Symposium in 2008 and 2018. He also received the 2016 Outstanding Technical Achievement Award from the Optical Networks Technical Committee of the IEEE Communications Society. Kamal chaired or co-chaired the Technical Program Committees of several IEEE-sponsored conferences, including the Optical Networks and Systems Symposia of IEEE Globecom 2007 and 2010, the Cognitive Radio and Networks Symposia of IEEE Globecom 2012 and 2014, and the Access Systems and Networks track of the IEEE International Conference on Communications 2016. He was also the chair of the IEEE Communications Society Technical Committee on Transmission, Access and Optical Systems (TAOS) for 2015 and 2016. He serves or has served on the editorial boards of a number of journals, including IEEE Communications, the IEEE Communications Surveys and Tutorials, the Elsevier Computer Networks journal, the Elsevier Optical Switching and Networking journal and the Arabian Journal of Science and Engineering.
Abstract
Worldwide, maintaining the health and safety of construction workers is a major problem. For example, in the UK construction industry alone, an unacceptable average of nearly 39 people are killed each year and countless others are seriously injured. To help reduce the risks, various countries have adopted a range of methodologies and approaches to foresee potential problems and put in place measures to mitigate them. In the United Kingdom, Risk Assessment Method Statements (RAMS) are widely used as a means of helping to manage construction work and to help ensure that the necessary precautions have been communicated to those involved. RAMS are reviewed by various people in the chain-of-command of a construction project before being signed off. However, this review process is not without significant issues. One of these issues is the inconsistent understanding, review, development and dissemination of the RAMS. To overcome some of the problems associated with the review of RAMS, a tripartite partnership between Aurora International Consulting, IBM Watson, and the University of South Wales is developing an Artificial Intelligence-enabled RAMS review system that helps facilitate a 'textbook' safety review every time. A fully functioning version of the prototype has the potential to revolutionize safety in the industry worldwide. Through the development of the prototype, it has been demonstrated that IBM Watson technologies provide a suitable toolkit to facilitate the analysis of the textual information held in RAMS. The talk will centre on explaining the problems faced and overcome in the development of the prototype Artificial Intelligence-enabled RAMS review system and articulate the road map for future developments.
Bio
Andrew Ware is Professor of Computing at the University of South Wales in the United Kingdom. His research interests centre on the use of intelligent computer systems to help solve real-world problems. Andrew is currently working on AI-related projects with a number of industrial and commercial partners that include Tata Steel, the National Health Service Wales Informatics Service, Wye Education, and Aurora International Consulting. Professor Ware teaches various computing courses, including artificial intelligence, data mining and computer programming. Moreover, Andrew has successfully supervised more than thirty PhD students and has been an active participant in a number of international research and teaching projects. Andrew is a Regional Director of Technocamps, an innovative and ambitious project that seeks to engage young people with computing and its cognate subjects.
Abstract
The ability to visualize a system outcome before it happens is extremely valuable, and the proposed Virtual Reality (VR) system makes this much easier and more accessible. Engineers will be able to design better products while customers can see final products pre-production, ultimately saving everyone time and money. This talk presents a distributed VR system for data-driven model creation, AI-based adaptive model evolution and high-performance model execution for large-scale immersive design and analytics. Creating VR visualisations from real-world distributed data and analytics offers a unique opportunity for next-generation analytics in many engineering, scientific and medical applications. This work aims to provide a distributed VR system for the development and deployment of VR models across networks. This will enable programmers and users to specify and create mathematical and engineering models of engineering applications in VR space. Users will be able to analyse data in a 3D environment by producing real-time models and visualisations as the data is captured and analysed by distributed engineering teams working on different aspects of engineering models. Dynamically integrating data, algorithms and analytics into a VR environment requires piecing hundreds of thousands of objects together and needs a careful understanding of geometrical and system models to minimise the computational burden. To overcome this problem, we use high-performance in-memory systems where we can store the components of the models in a distributed shared memory while users are working on their models. This research project enables distributed users, with minimal effort or specialist knowledge, to view, create and edit data within VR and deliver applications for use in areas such as engineering design, visualisation and immersive training. This requires distributed engineering and design teams to build collaborative VR models, analyse and integrate their own datasets into the VR models using AI and big data analytics algorithms, and intelligently evolve, update and visualise these VR models in real time as the data sources undergo changes. This distributed VR analytics system offers real-time collaborative design and development of engineering models and is one of the first attempts to offer VR-enabled data analytics and its immersive visualisation. Our proposed solution addresses a massive technological challenge, bringing together several highly technical skill areas such as VR, distributed systems, AI and data science to deliver a virtual space where users can meet and work collaboratively.
Bio
Ashiq Anjum is a professor of distributed systems and director of the data science research centre at the University of Derby, UK. His areas of research include data-intensive distributed systems and high-performance analytics platforms for continuous processing of streaming data. Prof Anjum has been part of EC-funded projects in distributed systems and large-scale analytics such as Health-e-Child (IP, FP6), neuGrid (STREP, FP7) and TRANSFORM (IP, FP7), where he investigated resource management and optimization issues of large-scale distributed systems and provided platforms for high-performance data analytics. He has been investigating large-scale distributed systems and analytics platforms for the LHC data in collaboration with CERN, Geneva, Switzerland for the last fifteen years. Before starting an academic career, he worked for various multinational software companies for around ten years. He has secured grants from industrial partners, Innovate UK, RCUK and other funding agencies for investigating high-performance video analytics systems for producing intelligence and evidence for medical, security, object tracking and forensic science applications. He is also closely working with healthcare providers, hospitals and pharma companies investigating high-performance analytics systems for distributed clinical intelligence and integration, iterative genome analytics and precision medicine. He has been actively working in collaboration with rail companies to investigate how rail infrastructures and services can benefit from the Internet of Things (IoT) and real-time analytics, by intelligently analyzing streams of data arriving from rail networks to increase the accuracy, reliability and capacity of rail infrastructures and services. In addition, he has been investigating ways to model rail networks as a distributed graph system and provide adaptive scheduling and resource management. Thanks to a large grant from Innovate UK, he has been working with a leading VR provider to enable real-time visualization of 3D engineering models and distributed algorithms in a Virtual Reality environment. This work allows distributed parties involved in large-scale collaborative engineering projects to identify potential conflicts or required changes at the design stage, rather than during manufacturing, when they are extremely costly to put right.
Abstract
Calculus is an important subject and field of study, needed to understand and analyse situations in economics, health, medicine, psychology, sociology, engineering, animated multimedia design and so many other areas still unknown to mankind. There is a serious problem in understanding and appreciating the power of this subject, and various reforms have been made in the US, India, and Israel to offset this limitation, which creates fear in the minds of the learner. Instead of looking at our own shortcomings, we, the instructors of calculus, do great damage by reinforcing the widespread notion: "I cannot understand calculus because I am not a math guy". Recent research in teaching calculus has shown that every child can study calculus and apply its tools to solve everyday problems. The big question is how to make sure that every child should study calculus in spite of all the limitations: the shortage of instructors, the absence of infrastructure, and, last but not least, the reluctance on the part of the school, college or university hierarchy to accept that the problem exists. In Israel, a civil emergency was declared not because of any law-and-order problem but because they think that they are far behind in teaching and understanding calculus; rightly so, they think that the survival of Israel depends on this very subject. So the big question that we should explore in this paper is how to make sure that every child should study calculus, and should understand calculus. But in order to address that problem, we first need to understand what is wrong in the teaching and learning of calculus; in other words, where the problem lies. Unless we pinpoint the disease and the reason for that disease, we cannot move an inch towards solving this grand problem in Pakistan. Again, in order to find the reason behind this malaise, we first need to know what calculus is all about and what is so special about calculus that it becomes extremely difficult to teach, understand and apply it to real-world problems. So this paper first addresses the pedagogical problem hidden in the very fabric of calculus by first analysing what calculus is. Only then shall we think of solving the grand problem: how to make sure that each child in Pakistan studies calculus.
Bio
M. Ashraf Iqbal is currently serving at Lahore Garrison University, Lahore. He was a Professor and Dean of the Faculty of Information Technology at the University of Central Punjab, Lahore. He was a Fulbright research scholar at the University of Southern California from 1992 to 1993. He has also worked as a DAAD research fellow at the Stuttgart Institute of Parallel and Distributed Computing and as a Research Assistant at ICASE, NASA Langley Research Center, Hampton, Virginia. He started his career teaching electrical engineering at UET, where he worked for almost 30 years, before moving to LUMS where, as the Head of the Computer Science Department, he started the doctoral programme with a focus on theoretical computer science and advanced applied research projects. He is well known in the computer science community for his lectures on graph theory at the Virtual University. As his interest in the area of design for learning developed, he moved to NUST, where he set up the MS programme in Innovation, Technology, Education, the first of its kind in Pakistan. Dr. Iqbal has also served as the Director of Namal College, Mianwali, and the Rector of Ali Institute of Education. Presently, his work is focused on understanding how technology can be used to overcome learning problems, with the mission "quality education for anyone, anywhere, anytime". He is also the author of a book on Graph Theory and Algorithms (Google Books). Currently, he is writing a book on education, pedagogy and technology, exploring how the three blend to make a dramatic change. He is also a poet and a short story writer.
Abstract
Deep Convolutional Neural Networks (CNNs) are a special type of Neural Network which has shown state-of-the-art performance on various competitive benchmarks. The powerful learning ability of deep CNNs is largely due to the use of multiple feature extraction stages (hidden layers) that can automatically learn representations from the data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and recently very interesting deep CNN architectures have been reported. The recent race in developing deep CNNs shows that innovative architectural ideas, as well as parameter optimization, can improve CNN performance. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of the processing units. However, the major improvement in the representational capacity of deep CNNs has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is receiving substantial attention. This survey thus focuses on the intrinsic taxonomy present in the recently reported deep CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven different categories, based on spatial exploitation, depth, multi-path, width, feature map exploitation, channel boosting, and attention.
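The block-as-structural-unit idea can be illustrated with the well-known residual block; here is a minimal PyTorch sketch, with channel counts and layout chosen purely for illustration:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A block (rather than a single layer) as the structural unit:
    two conv layers whose output is added back to the input, so the
    block learns a residual function."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # skip connection around the block

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)            # torch.Size([1, 64, 32, 32])
```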
Bio
Dr. Asifullah Khan has more than 20 years of research experience and is working as a Professor at PIEAS. He was awarded the President's Award for Pride of Performance for the year 2018. In addition, he has received four of HEC's Outstanding Research Awards and one Best University Teacher Award. He has also received the PAS-COMSTECH Prize 2011 in Computer Science & IT. He received Research Productivity Awards from the Pakistan Council for Science and Technology (PCST) in the years 2012, 2013, 2014, 2015, and 2016. In the field of Machine Learning and Pattern Recognition, he has 105 international journal, 53 conference, and 9 book-chapter publications to his credit. He has successfully supervised 16 PhD scholars so far and is on the panel of reviewers of 48 ISI international journals. Dr. Asifullah Khan has won 7 research grants as Principal Investigator. His research interests include machine learning, deep neural networks, image processing, and pattern recognition. He has been Head of the Department of Computer and Information Sciences at PIEAS since 2016.
Abstract
We have witnessed how e-commerce has changed the world and the way we do business. It has opened unlimited opportunities and has made it possible for us to live without boundaries. Over the past two decades, technology has been the key part of e-commerce business models. In the current age, e-commerce has shifted from the e-commerce 1.0 model to the e-commerce 3.0 model, and from conventional marketing tactics to AI-based marketing. I will be sharing the thinking and experience that we are currently putting into practice in our e-commerce business.
Bio
Aurengzeb Khan is an experienced entrepreneur and one of the five most influential figures in Pakistani e-commerce, adept at starting from zero and reaching new horizons. He is ranked No. 1 in the list of "CEOs of Pakistani start-ups". A former Adidas Asia-Pacific Director of Supply Chain, he was responsible for China, Thailand, Vietnam, Indonesia and Cambodia. He is the vice president of the Pakistan-China Chamber of Commerce. He has 20 years of working experience in China and is fluent in five languages, including Chinese, English, Arabic, and Urdu.
Abstract
Urdu is the national language of Pakistan and one of the prominent languages of the Indian subcontinent. Its script belongs to the family of Nabataean scripts and shares several attributes with other family members like Arabic and Persian. Urdu has posed major challenges to the OCR community due to the diagonal and seamless joining of individual letters to form ligatures. In this talk, I will present the efforts made over the last two decades on the recognition of printed Urdu text using traditional computer vision and machine learning approaches. I will further demonstrate how long short-term memory based deep learning architectures have solved this long-standing problem, making it possible to create practical Urdu OCR systems.
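A minimal sketch of the kind of LSTM-based recognizer the talk refers to, pairing a bidirectional LSTM over text-line feature columns with CTC loss; all sizes, and the feature extraction itself, are assumptions rather than the design of any deployed Urdu OCR system:

```python
import torch
import torch.nn as nn

N_CLASSES = 180          # e.g. Urdu character/ligature labels + CTC blank at 0

class LineRecognizer(nn.Module):
    def __init__(self, feat_dim=48, hidden=128):
        super().__init__()
        # In practice the per-column features would come from sliding a
        # window (or a small CNN) over the text-line image.
        self.lstm = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, N_CLASSES)

    def forward(self, x):                 # x: (batch, width, feat_dim)
        h, _ = self.lstm(x)               # contextual features per column
        return self.fc(h).log_softmax(-1)

model = LineRecognizer()
ctc = nn.CTCLoss(blank=0)
x = torch.randn(2, 100, 48)               # two text lines, 100 columns each
logp = model(x).permute(1, 0, 2)          # CTC expects (time, batch, classes)
targets = torch.randint(1, N_CLASSES, (2, 20))
loss = ctc(logp, targets,
           input_lengths=torch.full((2,), 100),
           target_lengths=torch.full((2,), 20))
print(loss.item())
```

CTC lets the network output one label distribution per image column and learn the alignment to the ligature sequence on its own, which is what makes segmentation-free recognition of cursive scripts practical.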
Bio
Faisal Shafait is currently working as the Director of the Deep Learning Laboratory at the National Center of Artificial Intelligence, Islamabad, Pakistan, as well as a Professor at the School of Electrical Engineering and Computer Science, National University of Sciences and Technology (NUST), Islamabad, Pakistan. Previously, he was an Assistant Research Professor at The University of Western Australia in Perth, Australia; a Senior Researcher at the German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; and a Visiting Researcher at Google Inc., Mountain View, California. He received his PhD with the highest distinction in computer engineering from TU Kaiserslautern, Germany in 2008. His research interests include machine learning and pattern recognition with a special emphasis on applications in document image analysis. He has co-authored over 150 publications in international peer-reviewed conferences and journals in this area. He is serving as the Founding President of the Pakistan Pattern Recognition Society, which is IAPR's official chapter in Pakistan. He recently received the IAPR Young Scientist Award, making him the only Pakistani and Muslim scientist to receive this prestigious award, given to the most outstanding young scientist worldwide in the field of pattern recognition and document analysis.
Abstract
Processing large-scale cloud data requires just-in-need compute resources. This becomes even more important in processing IoT data in a 5G network, where optimized AI models need to fit into various edge computing devices with restricted or limited computing capability. One of the major challenges is the scheduling overhead, which usually takes an extended amount of time to reach satisfactory scheduling decisions. Simulation-based Optimization and Ordinal Optimization were initially proposed in the control science community and have been proven useful for optimizing such workloads on very large clouds. In this talk, I will present a series of innovative works I have done along the Ordinal-Optimized (OO) line and their applications in large-scale cloud data processing: specifically, multi-objective OO scheduling for optimizing conflicting computing objectives, iterative OO (iOO) scheduling for fitting the optimization approach to multi-phase, time-series workloads, and evolutionary OO (eOO) for further extending the methodology to the simulation-based learning territory. This line of research has brought significant impact to the computing of the Laser Interferometer Gravitational-wave Observatory (LIGO) project, where processing terabytes of data per day from hundreds of distributed sensors on thousands of servers has achieved a 10X-100X speedup. These methods are used daily in the LIGO pipeline in the MIT and Caltech data centers for real-time LIGO data processing.
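The core ordinal-optimization idea, ranking many candidates with a cheap noisy simulation and spending exact evaluations only on the top few, can be sketched as follows; the scheduling problem, noise model and sizes are illustrative assumptions, not the talk's actual LIGO workloads:

```python
import random
random.seed(42)

# Illustrative stand-in for a scheduling problem: assign 50 tasks to 8 nodes.
TASKS = [random.uniform(1, 10) for _ in range(50)]

def makespan(schedule, noise):
    """Simulated evaluation: true cost is the max per-node load; 'noise'
    models how crude (fast) or exact (slow) the simulation is."""
    loads = [0.0] * 8
    for task, node in zip(TASKS, schedule):
        loads[node] += task
    return max(loads) + random.gauss(0, noise)

candidates = [[random.randrange(8) for _ in TASKS] for _ in range(1000)]

# Ordinal optimization: rank all candidates with a cheap, noisy model...
rough = sorted(candidates, key=lambda s: makespan(s, noise=5.0))
# ...then spend accurate (expensive) evaluations only on the top few.
best = min(rough[:20], key=lambda s: makespan(s, noise=0.0))
print("best makespan found:", round(makespan(best, noise=0.0), 2))
```

The payoff is that the *order* of candidates stabilizes under far less simulation effort than accurate *values* do, which is why the cheap screening pass can safely discard most of the search space.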
Bio
Dr. Fan Zhang is currently a Research Scientist with the IBM Massachusetts lab. He was a postdoctoral associate with the Kavli Institute for Astrophysics and Space Research at the Massachusetts Institute of Technology. He received his Ph.D. from the Department of Control Science and Engineering, Tsinghua University in January 2012. From 2011 to 2013 he was a research scientist at the Cloud Computing Laboratory, Carnegie Mellon University. An IEEE Senior Member, he received an Honorarium Research Funding Award from the University of Chicago and Argonne National Laboratory (2013), a Meritorious Service Award (2013) from the IEEE Transactions on Services Computing, and two IBM Ph.D. Fellowship Awards (2010 and 2011). His research interests include big-data scientific computing applications, simulation-based optimization approaches, cloud computing, and novel programming models for streaming data applications on elastic cloud platforms.
Abstract
Recent advances in cloud computing have led to the advent of Business-to-Business Software-as-a-Service (SaaS) solutions, opening new opportunities for EDA. High-Level Synthesis (HLS) in the cloud is likely to offer great opportunities to hardware design companies. However, these companies are still reluctant to make such a transition due to the new risks of Behavioral Intellectual Property (BIP) theft that a cloud-based solution presents. In this talk, we introduce a key-based obfuscation approach to protect BIPs during cloud-based HLS. The source-to-source transformations we propose hide functionality and make normal behavior dependent on a series of input keys. In our process, the obfuscation is transient: once an obfuscated BIP is synthesized through HLS by a service provider in the cloud, the obfuscation code can only be removed at the Register Transfer Level (RTL) by the design company that owns the correct obfuscation keys. Original functionality is thus restored and design overhead is kept to a minimum. Our method significantly increases the level of security of cloud-based HLS at low performance overhead. The average area overhead after obfuscation and subsequent de-obfuscation, with tests performed on ASIC and FPGA, is 0.39%, and over 95% of our tests had an area overhead under 5%.
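The flavor of key-based, transient obfuscation can be conveyed with a deliberately tiny sketch: a source-to-source transformation hides a constant behind a key, so the design behaves correctly only when the right key is supplied. This is written in Python for readability rather than the C/C++ sources HLS actually consumes, and the key and transform are made up; it only illustrates the key dependence, not the paper's actual transformations:

```python
KEY = 0xA5  # obfuscation key known only to the design company

def original(a, b):
    return (a + b) * 2

def obfuscated(a, b, key):
    """Transformed variant: the constant 2 is hidden behind the key,
    so the function behaves correctly only for the right key."""
    k = key ^ 0xA7            # equals 2 only when key == 0xA5
    return (a + b) * k

print(original(3, 4))             # 14
print(obfuscated(3, 4, KEY))      # 14 with the correct key
print(obfuscated(3, 4, 0x00))     # wrong key -> wrong behavior
```

In the described flow the owner later strips such key logic at RTL, which is why the residual area overhead can stay so low.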
Bio
Prof. Dr Guy Gogniat is a full Professor in Electrical & Computer Engineering (ECE) at the University of South Brittany (UBS), Lorient, France, where he is currently the Vice-President for research & innovation. He was a visiting researcher at the University of Massachusetts, Amherst, USA, where he worked on embedded system security using reconfigurable technologies. His research activities span multiple areas of embedded computing, including model-based design methodologies, adaptive computing, reconfigurable architectures, rapid system prototyping, embedded system security and hardware/software co-design. His current research focuses on embedded system security. Over the past 20 years, he has supervised 40+ PhD and MS theses combined. He has to his credit many research projects funded by French national and European funding agencies.
Abstract
The exceptional increase in the generation and availability of clinical and neuroimaging data sets has forced advancements in data processing infrastructures and analysis applications. These massive amounts of heterogeneous data, which are accumulated both in real time and over decades, are usually extremely critical for diagnostics and decision-making. In the medical domain, various e-infrastructures are offering suites of services for neuroanalyses worldwide. Due to such developments, massive amounts of data are being continuously and anonymously shared by hospitals and research centers to constitute the foundations of brain disease analyses, such as for Alzheimer's disease. However, this increase in the volume, variety and velocity of neuroimaging datasets, and the ever-increasing knowledge complexity in medical research, puts neuroscientists under severe difficulties in data integration, data linking and performing analyses. This talk will focus on the work needed to analyse neuroscience datasets for decision-making, and on the storage of these datasets and metadata, along with several pre-computed parameters, in a big data repository. Enabling such an end-to-end big data analysis mechanism requires building various services that perform all data ingest, prepare, transform and publish operations to support analyses. Furthermore, patient identification using big data and fuzzy logic will be discussed, which can be achieved through fuzzy processing. It can enable the sorting of patients by a particular intensity of Alzheimer's disease, short-term estimation of the progression of that disease, and placing individual patients in context with respect to other patients, covering, for example, appropriate treatment and estimated life expectancy.
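As a small illustration of the fuzzy-processing idea, the sketch below sorts patients by their membership in a "severe" fuzzy set computed from a cognitive score; the membership functions, score ranges and patients are entirely hypothetical and stand in for whatever clinical variables a real system would use:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical cognitive-score ranges, for illustration only.
def severity_memberships(score):
    return {
        "mild":     tri(score, 18, 24, 30),
        "moderate": tri(score, 10, 15, 20),
        "severe":   tri(score, 0, 5, 12),
    }

patients = {"P1": 22, "P2": 8, "P3": 14}
for pid, score in sorted(patients.items(),
                         key=lambda kv: severity_memberships(kv[1])["severe"],
                         reverse=True):
    print(pid, severity_memberships(score))
```

Because each patient gets graded memberships rather than a single hard label, borderline cases remain visible when sorting a cohort by a particular disease intensity.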
Bio
Dr Kamran Munir is Associate Professor in Data Science in the Department of Computer Science and Creative Technologies (CSCT) at the University of the West of England (UWE), United Kingdom (UK). Dr Munir's funded research projects are in the areas of Data Science, Big Data and Analytics, Artificial Intelligence and Virtual Reality, mainly funded by the European Commission (EC), Innovate UK and the British Council. In the past, he has contributed to various CERN (the European Organization for Nuclear Research) and EC-funded projects, e.g. CERN WISDOM, EC Health-e-Child and EC neuGRID4You (N4U), in which he led the Joint Research Area and the development of the Data Atlas/Analysis Base, Big Data Integration and Information Services. Dr Munir has published a number of research articles, and he is a regular PC member and editor of various conferences and journals. Dr Munir's role also includes the leadership and production of Computer Science and Data/Information Science degree courses in collaboration with industry, such as Big Data, Data Science, Cloud Computing and Information Practitioner. He also enjoys frequent collaborations with graduates, including collaborative work with UK industry, and has a number of successful MPhil and PhD thesis supervisions.
Abstract
The use of intravascular iodinated contrast agents is very common for patients undergoing Percutaneous Coronary Intervention. Risks associated with the intravascular administration of iodinated contrast agents are already recognized in the literature and in practice. It is, therefore, essential to reduce doses of intravenous iodinated contrast media. Identifying a safe contrast volume dose based on a patient's risk profile is nevertheless a challenging task. This talk will highlight the challenges and will present this as an open problem that is still in search of a reasonably accurate solution, to be tackled by exploring new predictors and deep learning algorithms.
Bio
Dr. Khalid Latif loves building teams and products that leverage data to solve complex problems and reveal meaningful insights using Machine Learning and Linked Data. He has helped various organizations in the design and development of intelligent information systems, such as clinical process optimization and healthcare analytics, social media trend analysis and news prediction, risk management for customs, fault identification on construction sites, and automated job application filtering and trend prediction. Khalid holds a Ph.D. degree in Data Science from the Vienna University of Technology and has written over 50 publications. His work in this area also includes contributions to multiple research projects supported by the Austrian Science Fund, the National ICT R&D Fund, HEC, WIPO, and the World Bank.
Abstract
Access-driven cache-based attacks, a sub-category of side-channel attacks (SCAs), are strong cryptanalysis techniques that break cryptographic algorithms by targeting their implementations. Despite valiant efforts, mitigation techniques against such attacks are not very effective. This is mainly because most mitigation techniques protect against a given specific vulnerability and do not take a system-wide approach. Moreover, these solutions either completely remove or greatly reduce the prevailing performance benefits in computing systems that have been hard-earned over many decades. In this talk, we argue in favor of enhancing security and privacy in modern computing architectures while retaining the performance benefits. We will discuss both hardware and software solutions for the detection and subsequent mitigation of cache-based information leakage.
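One simple software-level detection idea in this space can be sketched as threshold-based anomaly detection over cache-behaviour samples; the numbers below are made up for illustration, and a real detector would read hardware performance counters (e.g. via perf) and typically use a trained classifier rather than a fixed threshold:

```python
# Benign average last-level-cache miss rate (assumed baseline).
BASELINE_MISS_RATE = 0.02

def suspicious(samples, threshold=5.0):
    """Flag a process whose observed miss rate deviates sharply
    from the benign baseline."""
    rate = sum(samples) / len(samples)
    return rate > threshold * BASELINE_MISS_RATE

benign = [0.01, 0.03, 0.02, 0.02]
attack = [0.35, 0.40, 0.33, 0.38]   # Prime+Probe-style cache thrashing
print(suspicious(benign))   # False
print(suspicious(attack))   # True
```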
Bio
Dr. Khurram Bhatti is a Marie Curie Research Fellow, working as an Assistant Professor at the Information Technology University, Lahore. His current research interests include embedded systems, information security at both the hardware and software levels, cryptanalysis, mixed-criticality systems and parallel computing systems. Over the last 6 years, Khurram has taught at the University of Nice-Sophia Antipolis, France, and CIIT Lahore, Pakistan. He has worked with prestigious European research institutes such as INRIA, Lab-STICC, KTH, École Polytechnique de Paris, and the LEAT research laboratory. He holds a PhD in Computer Engineering and an MS in Embedded Systems from the University of Nice-Sophia Antipolis, France. Khurram Bhatti is also the Director of the Embedded Computing Laboratory at ITU, Lahore.
Abstract
Bio
Muhammad Maaz Rehan is a Senior Member of the IEEE and an Assistant Professor at COMSATS University Islamabad, Wah Campus, Pakistan. He has been associated with academia and industry for the last 15 years and is leading the Telecom and Networks (TelNet) Research Group. He is the recipient of two Bronze medals from Universiti Teknologi PETRONAS (UTP), Malaysia, as part of his PhD work, which concluded in 2016. He is an Editor of the IEEE Softwarization Newsletter and an Associate Editor of the IEEE Access and Springer Human-centric Computing and Information Sciences journals. Maaz is twice a fellow of the Internet Society (ISOC) for the Internet Engineering Task Force (IETF). He has authored more than 20 research articles and is the lead author of the book "Blockchain-enabled Fog and Edge Computing: Concepts, Architectures and Applications", to be published by the Taylor & Francis Group, CRC Press, USA. His research areas include: Blockchain; Internet of Things and Vehicles; ICN; Machine Learning for Networking; and Fog/Edge computing.
Abstract
Electric Vehicles (EVs) and Hybrid Electric Vehicles (HEVs) have attracted increasing attention and grown rapidly in recent years. Though various other technologies, viz. battery storage and materials, are also important, the electrical propulsion system is the heart of EVs/HEVs, and electrical drives are the core components of electrical propulsion systems. Due to developments in power electronics, control technologies, microprocessors, signal processing and magnetic materials, new and highly efficient electric drive systems have emerged. These non-conventional drives are increasingly being employed for high-performance motion control applications in aerospace, EVs and HEVs, and other servo and general industrial applications. The talk will review developments in permanent magnet materials and new developments in high-efficiency electric motors and drives, and compare their performance and characteristics for future EVs/HEVs.
Bio
Mahmood Nagrial obtained his Ph.D. from the University of Leeds, UK. Dr Nagrial has extensive experience in power electronics, drive systems and renewable energy systems. He has been Head of Electrical & Computer Engineering and Chair of the School of Mechatronic, Computer & Electrical Engineering. He has also been responsible for initiating undergraduate and postgraduate courses and higher degree research programs in Electrical & Computer Engineering and Mechatronic Engineering. He is Group Leader of the research group "Intelligent & Sustainable Electrical Systems". Dr Nagrial has been a leading researcher in the area of permanent magnets and their applications, and variable reluctance machines and drive systems. He has supervised Ph.D. and M.Eng. (Hons) research theses and postdoctoral fellows. He has conducted many short courses and published extensively in international journals and conferences, with over 300 research publications. Before joining Western Sydney University to start and develop its engineering programs, he worked as Principal Research Scientist at CSIRO in Sydney, where he was responsible for developing new devices using rare-earth magnets. Dr Nagrial is a Fellow of the IET (UK) and IE (Aust), a Senior Member of the IEEE (USA), a Fellow of the IEEE (Pak) and a Fellow of the IE (Pak). He has been on the organising committees of international conferences and has reviewed various papers for IEEE conferences and Transactions. He is presently Chair of the IEEE International Conference on Electrical Engineering Research & Practice (ICEERP) 2019, to be held in Sydney, 24-28 November 2019. The speaker's research interests include: permanent magnets and their applications, electrical drives, variable reluctance drives, PM machines and drive systems, renewable energy systems, wind energy conversion systems, power electronics, EMC/EMI, neuro-fuzzy control, power systems, smart and micro grids, intelligent control, and micro-electromechanical systems.
Abstract
Information security is fast becoming a first-class design constraint in almost all domains of computing. Modern cryptographic algorithms are used to protect information at the software level. These algorithms are theoretically sound and require enormous computing power to break by brute force; for instance, a brute-force attack on AES-128 would need to search a key space of 2^128 keys. However, recent research has shown that cryptosystems, such as AES and others, can be compromised due to the vulnerabilities of the underlying hardware on which they run. Side-Channel Attacks (SCAs) exploit such physical vulnerabilities by targeting the underlying platforms on which these cryptosystems execute. SCAs can use a variety of physical parameters, e.g., power consumption, electromagnetic radiation, memory accesses and timing patterns, to extract secret keys/information. The baseline idea is that SCAs can exploit the variations in these parameters during the execution of cryptosystems on particular hardware and can determine the secret information used by cryptosystems based on the observed parameters. In this talk, we shall discuss the attack vector, its potential implications, detection mechanisms and mitigation approaches.
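A classic, self-contained illustration of a timing side channel is a non-constant-time comparison, whose running time leaks how many leading bytes of a guess are correct; the secret and repetition count below are illustrative, and timing noise on a given machine may blur the gap:

```python
import time

SECRET = b"k3y!"

def insecure_compare(guess, secret=SECRET):
    """Early-exit comparison: running time grows with the number of
    leading bytes the attacker already has right."""
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return len(guess) == len(secret)

def measure(guess, reps=200_000):
    t0 = time.perf_counter()
    for _ in range(reps):
        insecure_compare(guess)
    return time.perf_counter() - t0

print("wrong 1st byte :", measure(b"x3y!"))
print("wrong last byte:", measure(b"k3yX"))   # typically measurably slower
```

An attacker who can repeat such measurements recovers the secret one byte at a time, which is exactly why production crypto code uses constant-time comparisons.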
Bio
Maria Mushtaq is a scientific researcher at LIRMM-CNRS, University of Montpellier (UM), France. She holds a PhD in information security from Lab-STICC, University of South Brittany (UBS), France. Maria has specific expertise in developing runtime detection and mitigation solutions against side-channel information leakage in computing systems. Her research interests mainly focus on cryptanalysis, constructing and validating software security components, and constructing OS-based security primitives against various hardware vulnerabilities.
Abstract
The past couple of hundred years have been an interesting era for the human race, highlighted by periodic technological revolutions. Steam engines, iron and textiles spurred the first industrial revolution. The second industrial revolution centered around steel, petroleum, chemicals and electricity. We are living through the third industrial revolution now, also known as the digital revolution, which is governed by bits and bytes, connected devices, communication, e-commerce, AI/ML, IoT, robots, digital economies and smart manufacturing. Industries and business models are undergoing transformation and our lives are being changed at an unprecedented rate. In this talk, we will discuss different aspects of this digital revolution and how it has disrupted our lives. From Diginomics to the spending patterns of consumers in the digital world, from social networking and how it impacts a single user's life to how its manipulation can swing entire public opinions, from changing business models to changing national agendas, from the marvels of technology to its perils and pitfalls, from how old ways are giving way to the new: we will use stories, numbers and statistics to get a glimpse of the digital disruption that we see around us every day and visualize what the future holds for us.
Bio
Kheam is Co-Founder & Director at Keystone Consulting, an IT start-up specializing in the implementation of Oracle cloud applications. Prior to co-founding Keystone, Kheam was part of Oracle Pakistan for 7 years, where he worked in various sales roles ranging from selling Oracle technology products to on-premise applications to cloud services. In his latest role, Kheam was responsible for the sales of Oracle's Customer Experience product portfolio in Pakistan. Prior to Oracle, Kheam worked with other multinationals like Cisco Systems and Nortel Networks and also served large Pakistani organizations like Pakistan Telecom Mobile (Ufone) and the National Database and Registration Authority. He has worked in a diverse set of business fields in these organizations and has gathered a wide breadth of knowledge and expertise in the domains of services sales, project management, field service management and networks. Kheam earned his MBA from Lancaster University, UK, as a Commonwealth Scholar and holds a Bachelor's degree in Computer Systems Engineering from the GIK Institute of Engineering Sciences and Technology, Pakistan.
Abstract
Access to massive amounts of data and high-end computers has heralded revolutionary advances in Machine Learning (ML), impacting domains ranging from autonomous driving and robotics to healthcare, the natural sciences, the arts and beyond. As we deploy modern ML systems in safety-critical and healthcare applications, however, it is important to ensure their security against adversarial attacks. This is easier said than done. Researchers have shown that many modern ML algorithms, especially the ones based on deep neural networks (DNNs), are fragile and can be embarrassingly easy to fool. Recent research has shown that DNNs are susceptible to a range of attacks including adversarial input perturbations, backdoors, Trojans, and fault attacks. This can create catastrophic effects for various safety-critical applications like automotive, healthcare, etc. For instance, self-driving cars and vehicular networks, which heavily rely on ML-based functions, exhibit a wide attack surface that can be exploited by well-known and yet-unknown-but-possible attacks on ML models. DNNs contain hundreds of millions of parameters and are hard to interpret or debug, let alone verify, significantly increasing the chance that they may misbehave. Further, any ML system is only as robust as the data on which we train it. If the data distributions change in the field, this can impair performance (for example, an autonomous vehicle trained in daytime conditions may not function at night). The goal of this talk is to shed light on various security threats for ML algorithms, especially deep neural networks (DNNs). Various security attacks and defenses for DNNs will be presented in detail. Afterwards, open research problems and perspectives will be briefly discussed. Towards the end, this talk will also highlight the need for reliability in ML systems considering faults in the underlying hardware. Anecdotally, researchers speculated that ML applications forgive hardware errors, but new research has revealed that accuracy drops even at low fault rates. In fact, the ML hardware in Tesla's self-driving cars uses expensive dual modular redundancy to mitigate the impact of faults.
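One of the adversarial input perturbations the talk covers, the fast gradient sign method (FGSM), fits in a few lines; the sketch below uses a toy untrained classifier, so the prediction flip is illustrative rather than guaranteed:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a real DNN.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # its true label

loss = loss_fn(model(x), y)
loss.backward()                                   # gradient of loss w.r.t. x

eps = 0.1                                         # perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())     # may flip under attack
```

The attack's power comes from its cost: a single gradient step, imperceptible at small eps, is often enough to change a trained network's decision.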
Bio
Muhammad Shafique is a full professor (Univ.-Prof.) of Computer Architecture and Robust Energy-Efficient Technologies (CARE-Tech.) at the Institute of Computer Engineering, Faculty of Informatics, Vienna University of Technology (TU Wien), since Nov. 2016. He received his Ph.D. in Computer Science from the Karlsruhe Institute of Technology (KIT), Germany, in Jan. 2011. Afterwards, he established and led a highly recognized research group for several years and conducted impactful research and development activities in Pakistan. Besides co-founding a technology startup in Pakistan, he was also an initiator and team lead of an ICT R&D project. He has established strong research ties with multiple universities in Pakistan, where he is actively co-supervising various R&D activities, resulting in top-quality research outcomes and scientific publications. Before that, he was with Streaming Networks Pvt. Ltd. (Islamabad office), where he was involved in the research and development of video coding systems for several years. Dr. Shafique has demonstrated success in leading team projects, meeting deadlines for demonstrations, motivating team members to peak performance levels, and completing independent challenging tasks. His experience is corroborated by strong technical knowledge and an educational record (Gold Medalist throughout). He also possesses an in-depth understanding of various video coding standards (HEVC, H.264, MVC, MPEG-1/2/4). His research interests are in computer architecture, power- and energy-efficient systems, robust computing, dependable and fault-tolerant system design, hardware security, emerging brain-inspired computing trends like neuromorphic and approximate computing, hardware and system-level design for machine learning and AI, emerging technologies and nanosystems, FPGAs, MPSoCs, and embedded systems. His research has a special focus on cross-layer analysis, modeling, design, and optimization of computing and memory systems covering various layers of the hardware and software stacks. The researched technologies and tools are deployed in application use cases from the Internet of Things (IoT), Cyber-Physical Systems (CPS), and ICT for Development (ICT4D) domains. Dr. Shafique has given several keynotes, invited talks, and tutorials at premier venues. He has also organized many special sessions at premier venues (like DAC, ICCAD, DATE, and ESWeek) and served as Guest Editor for IEEE Design and Test Magazine (D&T) and IEEE Transactions on Sustainable Computing (T-SUSC). He is the TPC Chair of ISVLSI 2020. He has served as the TPC co-chair of ESTIMedia and LPDC, General Chair of ESTIMedia, Track Chair at DATE and FDL, and PhD Forum Chair of ISVLSI 2019. He has served on the program committees of numerous prestigious IEEE/ACM conferences including ICCAD, ISCA, DATE, CASES, ASPDAC, and FPL. He is a senior member of the IEEE and the IEEE Signal Processing Society (SPS), and a member of the ACM, SIGARCH, SIGDA, SIGBED, and HiPEAC. He holds one US patent and has (co-)authored 6 books, 10+ book chapters, and over 200 papers in premier journals and conferences. Dr. Shafique received the prestigious 2015 ACM/SIGDA Outstanding New Faculty Award (given worldwide to one person per year) for demonstrating outstanding potential as a lead researcher and/or educator in the field of electronic design automation.
Dr. Shafique has also received six gold medals in his educational career, and several best paper awards and nominations at prestigious conferences like CODES+ISSS, DATE, DAC and ICCAD, a Best Master's Thesis Award, the DAC'14 Designer Track Best Poster Award, IEEE Transactions on Computers "Feature Paper of the Month" Awards, and a Best Lecturer Award. His research work on aging optimization for GPUs was featured as a Research Highlight in the Nature Electronics, Feb. 2018 issue.
Abstract
Cyber-attacks and the proliferation of malware are rising at an alarming rate. The availability of open-source malware code and supporting tools to generate malware variants has made it easy to create malware whose signature is not previously known. This ease of creating malware has enticed many attackers and has thus resulted in the proliferation of malware. As traditional signature and hashing algorithms have proven inadequate to detect malware, researchers have tried to design and use fuzzy hashes to counter the problem of detecting malware variants. Fuzzy hashes determine the similarity index of files, or of sections of a file, and thus can be used to compare malware variants with existing malware or malware families. This talk discusses the challenges in the detection of malware variants and the potential benefits and limitations of using fuzzy hashes for malware detection.
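A crude stand-in for what fuzzy hashes compute is a similarity index over byte n-grams; real fuzzy hashes such as ssdeep or TLSH use piecewise or context-triggered hashing rather than this Jaccard sketch, but the idea of scoring near-duplicates instead of demanding exact hash matches is the same:

```python
def ngrams(data: bytes, n: int = 4):
    """All length-n byte substrings of the input."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity over byte n-grams, in [0, 1]."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

base    = b"original malware payload with unpacking stub"
variant = b"original malware payload with renamed stub"
benign  = b"completely unrelated text of similar length!"
print(f"variant vs base: {similarity(variant, base):.2f}")  # high
print(f"benign  vs base: {similarity(benign, base):.2f}")   # near zero
```

A cryptographic hash of the variant would share nothing with the original's hash, which is exactly the gap similarity-preserving hashes try to close.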
Bio
Dr. Muhammad Yousaf is working as an Associate Professor in the Faculty of Computing, Riphah International University, Islamabad, Pakistan. He is also serving as Head of the Department of Cybersecurity and Data Science at Riphah International University, Islamabad. He is a Certified Information Systems Security Professional (CISSP). He received his Ph.D. in Computer Engineering in 2013 from the Center for Advanced Studies in Engineering, University of Engineering and Technology (UET), Taxila. His research interests include network security, network forensics, traffic analysis, mobility management, and IPv6. At Riphah, he is leading the Network Security Research Group, where he is supervising many national as well as international R&D projects in the area of network and cybersecurity.
Abstract
In today's world, due to the pace of industrialization and the increase in human activities, industries across the world struggle to satisfy the enormous requirements of their customers, and the energy industry is no exception. Energy businesses around the globe face difficulties in large-scale and peer-to-peer energy trading, supply chain tracking, asset management, and privacy and security, among other challenges for the information and communication technologies that are currently revolutionizing the energy sector. Blockchain, among a series of other technological options, is considered in this talk as a way to provide a long-term enabling environment that makes the energy system more robust, efficient and secure. Blockchain is an emerging paradigm in the smart grid, which provides a secure and decentralized environment for autonomous entities. It can support the efficient operation of energy systems, reduce energy costs, and improve the resilience and reliability of these systems. In addition, it may assist in solving several problems of optimization and energy reliability by providing visibility and control of real-time power injection and flow from distributed energy resources (DERs) at the substation level. Large penetration of DERs without precise cybersecurity measures, which involve monitoring and trustworthy communication, may jeopardize the energy system and cause outages and reliability problems for consumers. Attackers may alter DER data sent to the energy management system by exploiting an insecure communication channel to compromise the DERs' control algorithms. In order to mitigate this type of attack, the blockchain's ledger can be used to record the data transaction time. In addition, it authenticates and verifies transaction data to drop any command that contradicts the commands contained in the smart contract. The objective of this talk is to examine the role of blockchain technology in the energy sector, more specifically the smart grid, in selected fields such as data science, energy trading, and privacy and security.
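A minimal sketch of the command-dropping idea, assuming a simulated contract with made-up device names and limits; a real deployment would execute this admission logic in an on-chain smart contract rather than a local dictionary:

```python
# Operating envelope recorded for a DER (all values are assumptions).
CONTRACT = {"device": "der-42", "allowed_ops": {"set_power", "status"}, "max_kw": 50}

def admit(command: dict) -> bool:
    """Accept a control command only if it matches the contract's terms."""
    return (command.get("device") == CONTRACT["device"]
            and command.get("op") in CONTRACT["allowed_ops"]
            and command.get("kw", 0) <= CONTRACT["max_kw"])

commands = [
    {"device": "der-42", "op": "set_power", "kw": 30},   # legitimate
    {"device": "der-42", "op": "set_power", "kw": 500},  # injected by attacker
]
for cmd in commands:
    print(("accepted" if admit(cmd) else "dropped"), cmd)
```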
Bio
Nadeem Javaid received a Master's degree in Electronics from Quaid-i-Azam University, Islamabad, and a Ph.D. degree from the University of Paris-Est in 2010. He is an Associate Professor and the founding director of the ComSens (Communications over Sensors) Lab, Department of Computer Science, COMSATS University Islamabad, Islamabad Campus, Pakistan. He has supervised 16 Ph.D. and 116 Master's theses. He has authored over 850 papers in technical journals and international conferences. He was awarded the Best University Teacher Award 2016 by the Higher Education Commission (HEC) of Pakistan and a Research Productivity Award from the Pakistan Council for Science and Technology (PCST) for the year 2017. His research interests include: data analytics, smart grids, Blockchain, IoT, Wireless Sensor Networks, etc.
Abstract
The European General Data Protection Regulation (GDPR) came into effect in May 2018 and governs the storage and processing of personal user data that would allow an individual to be recognised. It also focuses on increasing awareness of how user data is subsequently analysed to derive insights (particularly for marketing and profiling purposes), and the level of engagement a user should have in this process. Understanding how GDPR influences the processing of personal data also has a bearing on the use of Artificial Intelligence-based algorithms on such data. A key aim of this regulation is to increase accountability and transparency in how data "controllers" manage personal data. The major beneficiary is the individual; nevertheless, GDPR is also applicable to businesses operating in a B2B context. This talk will describe how GDPR impacts data processors, particularly Cloud Service Providers who process personal data on behalf of data controllers. A comparison is provided of the monitoring tools being used by current Cloud Service Providers to support GDPR, such as AlienVault, Sumo Logic, Datadog, AWS CloudTrail and Google Stackdriver. A data hosting environment that is able to record events on user data is proposed, enabling the recording of such events in a Blockchain for subsequent verification and auditing. We describe performance trade-offs in offering such a hosting environment for user applications. We also speculate on how GDPR applies to the Internet of Things and emerging Edge computing environments.
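The proposed recording of events on user data for later verification can be sketched, under heavy simplification, as an append-only hash chain; a real system would anchor these hashes in a distributed ledger rather than a local list, and the event fields here are hypothetical:

```python
import hashlib, json, time

def append_event(chain, event):
    """Append a data-processing event, linking it to the previous block's
    hash so any later modification breaks the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"event": event, "ts": time.time(), "prev": prev}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def verify(chain):
    """Recompute every hash and every back-link."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"] or (i and block["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = []
append_event(chain, {"subject": "user-17", "action": "read", "field": "email"})
append_event(chain, {"subject": "user-17", "action": "profile", "purpose": "marketing"})
print(verify(chain))                       # True
chain[0]["event"]["action"] = "delete"     # someone tampers with history
print(verify(chain))                       # False: the audit trail is broken
```

The performance trade-off the talk mentions follows directly: every data access now pays for hashing and ledger writes in exchange for a tamper-evident audit trail.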
Bio
Omer F. Rana is Professor of Performance Engineering at Cardiff University, with research interests in high performance distributed computing, data analysis/mining and multi-agent systems. He is also the Dean of International for the Physical Sciences and Engineering College at Cardiff University, responsible for establishing and supporting collaborative links between Cardiff University and other international institutions. He was formerly the deputy director of the Welsh eScience Centre, where he had the opportunity to interact with a number of computational scientists across Cardiff University and the UK. He is a fellow of Cardiff University's multi-disciplinary "Data Innovation" Research Institute. Rana has contributed to specification and standardisation activities via the Open Grid Forum and, prior to joining Cardiff University, worked as a software developer with London-based Marshall Bio-Technology Limited, where he developed specialist software to support biotech instrumentation. He contributed to public understanding of science via the Wellcome Trust funded "Science Line", in collaboration with BBC and Channel 4. Rana holds a PhD in "Neural Computing and Parallel Architectures" from Imperial College (London Univ.), an MSc in Microelectronics (Univ. of Southampton) and a BEng in Information Systems Eng. from Imperial College (London Univ.). He serves on the editorial boards (as Associate Editor) of IEEE Transactions on Parallel and Distributed Systems, (formerly) IEEE Transactions on Cloud Computing, IEEE Cloud Computing magazine and ACM Transactions on Internet Technology. He is a founding member and associate editor of ACM Transactions on Autonomous & Adaptive Systems.
Abstract
We enumerate some of the theoretical limits that Information Theory imposes on Artificial Intelligence, in particular by quantifying the tremendous quantity of information (entropy) that would be needed to train AI beyond human capabilities. We also show how the use of rather simple and fast algorithms can accelerate the use of AI in tracking mankind's thoughts on social media.
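As a back-of-the-envelope illustration of the kind of entropy estimate such information-theoretic arguments rest on (a sketch under toy assumptions, not the speaker's analysis): the Shannon entropy of a symbol stream bounds how much information each training example can actually convey.

```python
# H = -sum p log2 p over character frequencies of a toy corpus.
import math
from collections import Counter

corpus = "the quick brown fox jumps over the lazy dog " * 100
counts = Counter(corpus)
total = sum(counts.values())

entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"{entropy:.2f} bits/char -> "
      f"{entropy * total / 8 / 1024:.1f} KiB of information in the corpus")
```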
Bio
Philippe Jacquet graduated from Ecole Polytechnique, Paris, France in 1981, and from Ecole des Mines in 1984. He received his PhD degree from Paris Sud University in 1989. Since 1998, he has been a research director at Inria, a major public research lab in Computer Science in France. He has been a major contributor to the Internet OLSR protocol for mobile networks. His research interests involve information theory, probability theory, quantum telecommunication, protocol design, performance evaluation and optimization, and the analysis of algorithms. Since 2012, he has been with Alcatel-Lucent Bell Labs as head of the department of mathematics of dynamic networks and information.
Abstract
Bio
I serve as Chief Cyber Security at the Khyber Pakhtunkhwa Cyber Emergency Response Center (KPCERC), a unique initiative that complements the digital ecosystem by providing technological support and training in cyber security. KPCERC aims to build the capacity of government departments by leveraging expertise and skills in the domains of cyber security. It aspires to create cyber security awareness, to assist and train government department employees and stakeholders, and to provide technology support, services and solutions. My role is designing and driving the execution of cyber security initiatives, complementing the digital ecosystem and promoting e-governance solutions in KP. I have served as an assistant professor at COMSATS University Islamabad (CUI), where I promoted innovation, product development and technology-based startups, mentored technology-based student ideas into products, initiated the first MS program in Cybersecurity at CUI, and have promoted cybersecurity at various national and international forums. I completed my MS in Mobile & Radio Communication and my PhD in public safety communication systems at Lancaster University, U.K., where I worked with the Cyber Security Centre as well as HW Communications, and contributed to various ETSI, FP7 and TSB-UK funded projects. At KPCERC, I am involved in setting up the provincial CERT, standardizing ICT-based services, and conducting vulnerability assessment and penetration testing of the digital assets of the Government of Khyber Pakhtunkhwa.
Abstract
Over the last couple of decades, business models have changed enormously, due at least in part to the digital transformation and the astronomical demand for and supply of new applications and services. To keep up with increasing pressure from businesses, newly envisioned services must be realized rapidly. The Application Programming Interface (API) is a mechanism that has enabled the rapid realization, scalability, and sharing of services and value among different entities. In other words, APIs are the new business trend among enterprises. APIs provide direct access to the business logic of applications and data, which is of paramount importance for enterprises to deliver their services without delay and to share data with partners. However, despite the exciting features of APIs and their undisputedly important role in enterprises, APIs lure cyber attackers and suffer from a number of attacks. This makes them a double-edged sword: in addition to scaling the business of an enterprise, they introduce new attack vectors and new points of vulnerability. Recent research has shown that cyber attackers target APIs to attack enterprises because APIs are (possibly) easy targets from which to launch attacks. Furthermore, the availability of computation and communication resources renders intelligent techniques (such as Artificial Intelligence) feasible for security in the cyber domain. The rationale for using Artificial Intelligence (AI)-based techniques, and different breeds of AI, in security is their applicability and effectiveness in detecting and mitigating cyber-attacks. In the same spirit, AI, Machine Learning (ML), and Deep Learning (DL) have been used to protect APIs against misuse and different kinds of attacks. In this talk, the security requirements and the current state of API security will be discussed. From the security-solutions standpoint, this talk will cover the current solutions for API security and their shortcomings, which will lead us to discuss the role of AI, ML, and DL in API security. Furthermore, this talk will also touch upon General Data Protection Regulation (GDPR) compliance of API security. Towards the end of the talk, we will identify the current trends and pressing issues in API security that need immediate attention with respect to ML and DL.
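A minimal sketch of one ML approach to API misuse detection, under stated assumptions (synthetic data, hypothetical per-client features) and not the talk's specific method: an unsupervised anomaly detector is fit on normal API usage and then flags clients whose usage pattern deviates sharply, e.g. a scraper hammering many endpoints.

```python
# Unsupervised anomaly detection over per-client API usage features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per client window: [requests/min, mean payload bytes,
# distinct endpoints hit, 4xx error rate] -- hypothetical choices.
normal = rng.normal(loc=[30, 800, 5, 0.02],
                    scale=[8, 150, 2, 0.01], size=(500, 4))
scraper = rng.normal(loc=[400, 300, 60, 0.30],
                     scale=[50, 50, 10, 0.05], size=(5, 4))

X = np.vstack([normal, scraper])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

flags = model.predict(X)            # +1 = inlier, -1 = anomaly
print("flagged clients:", np.where(flags == -1)[0])
```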
Bio
Dr. Rasheed Hussain received his B.S. degree in Computer Software Engineering from the University of Engineering and Technology, Peshawar, Pakistan in 2007, and his MS and PhD degrees in Computer Science and Engineering from Hanyang University, South Korea in 2010 and 2015, respectively. He worked as a Postdoctoral Fellow at Hanyang University from March 2015 to August 2015, as a guest researcher and consultant at the University of Amsterdam (UvA), The Netherlands from September 2015 till May 2016, and as Assistant Professor at Innopolis University, Innopolis, Russia from June 2016 till December 2018. Currently he is an Associate Professor and head of the MS program in Security and Network Engineering (SNE) at Innopolis University. He is also the Director of the Networks and Blockchain Lab at Innopolis University and serves as an ACM Distinguished Speaker. He is a senior member of IEEE, serves on the editorial boards of various journals including IEEE Access, IEEE Internet Initiative, and Internet Technology Letters (Wiley), and is a reviewer for most IEEE Transactions and for Springer and Elsevier journals. He also serves as a technical program committee member of various conferences such as IEEE VTC, IEEE VNC, IEEE Globecom, IEEE ICCVE, and IEEE ICC. He is a certified trainer for the Instructional Skills Workshop (ISW) and a recipient of the Netherlands' University Teaching Qualification (Basis Kwalificatie Onderwijs, BKO). His research interests include information security and privacy, particularly security and privacy issues in Vehicular Ad Hoc NETworks (VANETs), vehicular clouds and vehicular social networking, applied cryptography, Internet of Things, Content-Centric Networking (CCN), cloud computing, API security, and blockchain. Currently he is working on machine and deep learning for IoT security and API security.
Abstract
A country's progress depends heavily on the mobility its transportation system provides. Ever-increasing demand for transportation raises the likelihood of road crashes. Not only in underdeveloped countries but also in developed ones, road accidents have become an issue that must be resolved. Road-user behavior, road conditions and infrastructure, and the growing number of motor vehicles are driving a steady rise in road accidents. Our objectives include identifying the traits that are directly or indirectly linked to the root causes of road accidents, and training models using machine learning and artificial intelligence techniques to predict traffic accidents and highlight their causes, so that the concerned authorities can optimize traffic routes, improve road designs, and prioritize road maintenance.
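A minimal sketch of this kind of predictive model, under stated assumptions (synthetic data, hypothetical trait names) rather than the project's actual pipeline: a classifier is trained on road and traffic traits, and its feature importances hint at which traits contribute most to accidents.

```python
# Predict accident occurrence from road/traffic traits (synthetic demo).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 2000

# Hypothetical traits: speed limit (km/h), traffic volume (veh/day),
# rain flag, road condition score (0 = worst .. 1 = best).
X = np.column_stack([
    rng.choice([50, 70, 100, 120], n),
    rng.integers(100, 5000, n),
    rng.integers(0, 2, n),
    rng.random(n),
])
# Synthetic ground truth: crashes more likely with higher speed,
# higher volume, rain, and worse road condition.
risk = 0.004 * X[:, 0] + 0.0001 * X[:, 1] + 0.4 * X[:, 2] - 0.5 * X[:, 3]
y = (risk + rng.normal(0, 0.2, n) > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_.round(3))
```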
Bio
Dr. Saddaf Rubab is working as Assistant Professor at NUST Military College of Signals. She completed her doctoral studies at Universiti Teknologi PETRONAS, Malaysia in May 2018. She received her MSc in Computer Software Engineering from NUST College of Electrical & Mechanical Engineering (CEME) in 2012 and held various academic positions from 2009 to 2013. In her current research, she is working on forecasting and applying AI techniques in various interdisciplinary areas. Apart from this, her research interests include distributed computing, security and prediction systems.
Abstract
How does one take an enterprise-level turnkey project with HW and SW components from concept to completion? A complex project involves many stakeholders, whose importance and relevance change across the life of the project. Successful execution, from concept to completion, requires different sets of skills, which necessitates a balanced team. The team usually includes a program director, project managers, task leads, QA leads, a software system architect, a HW system architect, front-end and back-end developers, UI/UX designers, a QA team, a deployment team, a logistics management team, a maintenance, support and upgrades team, and a change management team, among other roles. My talk will list the different stages in the life of an enterprise-level IT project with HW and SW components, associating with each stage the relevant stakeholders and skill sets. I shall also give examples from projects I have been part of and the strategies we adopted for their successful execution.
Bio
Dr Shoab Khan is a Professor of Computer and Software Engineering at NUST College of EME. He received his PhD from the Georgia Institute of Technology, USA in 1995. While in the US he gained extensive experience working in several top-notch technology companies such as Scientific Atlanta, PictureTel and Cisco Systems. In 1999 Dr Khan founded a startup named Communication Enabling Technology (CET) from the drawing room of his house in Pakistan. The startup raised US $17 million in venture funding in 2000. CET had its head office in Irvine, CA and a development office in Islamabad. With Dr Khan as chief architect, CET designed the world's highest-density media processor chip for VoIP media gateways. For his innovative technology work, Dr Khan has 5 US patents to his credit. He has contributed 350+ international publications and a world-class textbook, Digital Design of Signal Processing Systems, published by John Wiley & Sons and followed in many universities across the globe. He has supervised or co-supervised 20+ PhD and 120+ MS theses to completion. He is also a founding member (Chancellor / CEO) of CASE and CARE. CASE is a premier engineering and management school operating as a federally chartered degree-granting institute, whereas CARE, under his leadership, has risen to be one of the most profound high-technology engineering organizations in Pakistan. For his eminent industrial and academic profile, Dr Khan has been awarded numerous honors and awards, including the Tamgha-e-Imtiaz of Pakistan, the HEC Best Researcher Award and the NCR National Excellence Award in Engineering Education. He is currently serving as a member of the Prime Minister's task forces on IT and Telecommunication, Science & Technology, and Knowledge Economy, and as Deputy Chairman of the National Computing Education and Accreditation Council (NCEAC), and served as Chairman of the Pakistan Software Houses Association (P@SHA) for the year 2014-15.
Abstract
Since women in tech and their empowerment is a hot topic these days and very close to my heart, I'd like to speak about 'Bridging the Gender Gap: Pakistani Women in Tech Space'. It will be a discussion of women in the tech space to date, the gaps and major hurdles in the industry, and opportunities for women to excel locally and globally, with mention of government- and private-sector-led initiatives. Apart from this, I would like to discuss a few case studies of women who made their way to employers like Facebook, Google, Amazon and Microsoft, as well as female-led and female-only teams doing wonders in the domains of education, transportation, health and agriculture.
Bio
Sidra Jalil is a community builder and a tech graduate with strong expertise in the domains of marketing, research and communication. She has worked in industry, primarily in the technology and social sectors, for over 13 years across diverse domains, and for the last 8 years particularly within Pakistan's entrepreneurial ecosystem, both as an entrepreneur and an intrapreneur. She has worked with Code for Pakistan and volunteers for OPEN Islamabad and the Internet Society, Islamabad Chapter, as its Vice President, organizing workshops, special interest groups around startup problems, networking events, motivational talks and forums to address startup issues. She is also the first and only female Ambassador of AngelHack in Pakistan. She is a blogger, internet marketer, motivational speaker and a socially active person with a keen interest in community work.
Abstract
We are in an age where we are crunching more data than ever before. Data science is the art of transforming data into value. Data is being generated at an enormous pace and on an outstanding scale, yet this data is of little value if it is not mined, refined and harvested. Here comes the role of data science, which extracts and models the data to inform decision-making in a proactive and systematic fashion that can be generalized and become profitable for industry. This process involves the use of artificial intelligence, machine learning and statistical methods. Rapid advancements continue to be made in this technical, revolutionary world, and this has led us to attempt to paint a picture of the data science landscape in 2019. With the advancement of technology, along with the 4th Industrial Revolution, much can now be achieved from the comfort of one's bedroom.
Bio
Currently, Dr. Sohail is working as Professor & Chairman of Computer Science at COMSATS University Islamabad. In 2011 he joined the University Institute of Information Technology, PMAS-Arid Agriculture University, Rawalpindi as its Director. He graduated with honors in Computer Science from the University of Wales, United Kingdom in 1994, and received his PhD from the Faculty of Information Technology at Monash University, Melbourne, Australia in 2006. Professor Sohail has taught and researched in data mining (including structural learning, classification, and privacy preservation in data mining, text and web mining), big data analytics, data science and information technology, and he has published extensively (more than 150 publications) in international journals as well as conference proceedings. He serves on the editorial teams of well-reputed scientific journals, has served as a program committee member of numerous international conferences, and regularly speaks at international conferences, seminars and workshops.
Abstract
Technology risk management disciplines, and predominantly project risk management, are often assumed to rest on a few simple frameworks that can be easily understood and applied, not only by managers but also by the majority of individuals. In practice, however, project risk management methods have tended to be too complex to be easily understood and applied by non-experts. Modern project risk management methods were developed primarily in the 1980s by expert practitioners (at the beginning, mostly engineers) for practitioners (also primarily engineers). The pivotal assumption of these methods has been that documenting every aspect of a project in detail will provide a high level of control over the planned activities during implementation. Many project managers ended up producing massive numbers of documents and swathes of paperwork, leading to an overall feeling that the role was primarily administrative. Scientific and technological advances in risk management are at a point where challenges to our education, health, governance, environment and wellbeing may be defined and potentially addressed in ways that could not have been imagined a mere decade ago. Information and Communication Technologies (ICT) have transformed every aspect of our lives and offered unprecedented opportunities and challenges for education, health and government. Implementation architectures built on change management and technology risk management best practices continue to be offered as strategic drivers towards improved and efficient governance models, optimized cost, and better control and productivity for complex resource-mapping projects. In this talk, I will present case studies from the public sector and a regional case study on cross-functional, team-based technology risk management. I analyse the successes and failures of the multi-organisational mechanisms that supported their implementation and identify the requirements for successful deployment. The talk uncovers a critical requirement: the need not only to break the status quo but also to manage and overcome resistance to change by formalizing a systematic technology risk management process for complex resource mapping.
Bio
I am a Technology Integration and Risk Management Consultant at the Government of Pakistan, IDB (Islamic Development Bank) and the Higher Education Commission of Pakistan, and Chief Technology Officer (CTO) at Worldwide Technologies & Consulting Pty, Australia. I have 20+ years of experience as an IT professional, primarily in technology risk management, project management, systems integration, business intelligence, and applications services framework consultancy in the public sector and the oil & gas, public health and telecom sectors. I head the service delivery group of a multi-disciplinary consulting practice that delivers solutions using a wide variety of IT automation technologies and solutions, and management and orchestration platforms for tier-3 datacentres and cloud solutions. I am an expert in project planning, execution, and monitoring and control, combining strong leadership and successful team-building capabilities with technical and communication skills, and diverse technical expertise derived from rapid learning and effective application of cutting-edge technology. I facilitate problem-solving teams that accurately assess technical challenges and successfully transform ideas into appropriate, workable solutions. As a hard-core portfolio and program manager, I have managed a number of international and national projects of 1000+ man-months on large tier-3 platforms in SaaS and PaaS modes. As a Principal Applications Framework Consultant, I am primarily responsible for the integration and design of applications services frameworks across large infrastructures. I am a writer, speaker, researcher and consultant in the fields of technology management and middleware risk management and evaluation. In addition to all of this, I serve as an adviser, policy maker and board member to several national (public sector) and international organizations.
Xavier Fernando
Ryerson University, Canada
Abstract
5G and beyond wireless networks envision a number of V2X (where X stands for Vehicle, Infrastructure or Network) communication scenarios to enable autonomous vehicles and intelligent transport systems. Visible Light Communication (VLC) is gaining momentum for V2X communication thanks to the availability of abundant bandwidth and its inherent short-range confinement. The wide deployment of solid-state lights and image sensors in vehicles enables simultaneous lighting, sensing and communication. However, compared to indoor systems, outdoor vehicular VLC systems are exposed to rapidly varying channel conditions and high ambient noise, which call for advanced solutions. The highly directional nature of light rays is another issue. Popular radio-based solutions such as OFDM cannot be directly applied in the optical domain. The talk will highlight the benefits and challenges of both VLC and Fi-Wi systems.
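To illustrate why radio OFDM cannot be carried over unchanged: an intensity-modulated optical link requires a real-valued, non-negative drive signal, so optical variants such as DCO-OFDM (one standard adaptation, offered here as background rather than the talk's specific approach) impose Hermitian symmetry on the subcarriers and add a DC bias. A minimal numpy sketch:

```python
# DCO-OFDM symbol generation: Hermitian symmetry -> real IFFT output,
# then DC bias + clipping -> non-negative LED drive signal.
import numpy as np

N = 64                                   # IFFT size (subcarriers)
rng = np.random.default_rng(0)

# Random QPSK symbols for the N/2 - 1 usable data subcarriers.
data = (rng.choice([1, -1], N // 2 - 1)
        + 1j * rng.choice([1, -1], N // 2 - 1)) / np.sqrt(2)

# Hermitian-symmetric frame: X[0] = X[N/2] = 0, X[N-k] = conj(X[k]).
X = np.zeros(N, dtype=complex)
X[1:N // 2] = data
X[N // 2 + 1:] = np.conj(data[::-1])

x = np.fft.ifft(X)
assert np.allclose(x.imag, 0, atol=1e-12)   # real, as IM/DD requires

bias = 3 * np.std(x.real)                   # DC bias, then clip at zero
drive = np.clip(x.real + bias, 0, None)
print("min drive level:", drive.min())      # non-negative by construction
```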
Bio
Xavier Fernando (http://www.ee.ryerson.ca/~fernando) is a Professor and Director of the Ryerson Communications Lab. His research focus is on signal processing for wireless communications, with a special interest in photonics for wireless access. He was an IEEE Distinguished Lecturer and has delivered over 50 invited lectures worldwide. He has (co-)authored over 150 research articles, holds three patents, and authored a widely selling monograph on Radio over Fiber. He has won 15 international prizes and awards for his research, including the first and second prizes at Opto-Canada, the bronze prize in the IEEE MTT Society international design competition and the first prize at IEEE CCECE. He has played key roles in many reputed conferences and edited journal special issues. He was a member of the Ryerson Board of Governors and a program evaluator for ABET. He was a finalist for the Top 25 Immigrant Award of Canada in 2012. He was the Chair of IEEE Canada Central Area, a member of the IEEE Canada Board and the Chair of the IEEE Canadian Conference on Electrical Engineering. Currently his lab holds over $1 million in research funding.
Abstract
Artificial Intelligence is one of the most revolutionary emerging technologies of the modern era and is widely expected to spearhead the upcoming technological revolution. The key feature of AI is its potential to be integrated into all sorts of technical, socio-technical and social systems, ranging from domain-specific areas such as robotics, IoT, Industry 4.0, surveillance, smart cities, crime prevention, agriculture and healthcare to traditionally non-technological domains such as finance and the judiciary. According to Forbes, AI is forecast to add $15.7 trillion to global GDP by 2030, with inevitable integration into every aspect of human life, much like the revolution of electricity during the 20th century. In Pakistan, the AI frontier is led by the National Center of Artificial Intelligence (NCAI), with labs in major universities selected by HEC on competitive grounds. This talk will focus on the AI research activities being conducted in Pakistan by NCAI.
Bio
Dr Yasar Ayaz holds a PhD specializing in Robotics & Machine Intelligence from Tohoku University, Japan and is currently the Chairman / Central Project Director of the National Center of Artificial Intelligence (NCAI) of Pakistan, the Government of Pakistan's newest leading technology initiative with funding of more than US $10 million. Dr Yasar is also the Head of the Department of Robotics and Artificial Intelligence at the School of Mechanical and Manufacturing Engineering (SMME) of the National University of Sciences and Technology (NUST), Pakistan. He has also been awarded the honorary title of Specially-Appointed Associate Professor by Tohoku University, Japan. He is the Founder and President of the IEEE Robotics and Automation Society of Pakistan and Deputy Chairman of the National Technology Foresight Panel on Robotics at the Pakistan Council for Science and Technology (PCST). Dr Yasar is the author of more than 80 international publications and reviews for a number of major journals in the areas of robotics, artificial intelligence and bioengineering. His research has been cited by top universities in more than 13 countries, including South Korea, USA, Japan, France, Germany, Hungary, Canada, China, Iran, Croatia and Singapore, and he has claimed a number of national and international awards for his research. His research paper on understanding human emotions from hand gestures was recently declared the Overall Best Paper at the International Conference on Human Computer Interaction 2018 in London, UK, and his work on robotic grasping using soft fingers received the Third Best Technical Paper Award at the International Conference on Climbing and Walking Robots (CLAWAR) in Sydney, Australia. In May 2014 Dr Yasar was conferred the President's Gold Medal by the President of Pakistan for being the Overall Best Teacher of NUST, the university's highest performance award, given on the basis of overall teaching performance. As per the Pakistan Book of Records, Dr Yasar also holds two national records in the field of engineering, for which he has been awarded PBR Gold Medals: supervising the undergraduate final degree project with the maximum number of international research papers (10), and supervising the undergraduate degree project with the maximum number of Thomson Reuters impact factor journal papers (3). Dr Yasar has been featured in Who's Who in the World, Berkeley, USA as a leading international researcher and academician from 2013 to the present. He has also been named among the top 100 educators of the world by the International Biographical Centre of Cambridge, UK, and in March 2017 the National Academy of Young Scientists (NAYS) Pakistan named him one of the top 20 most eminent scientists of Pakistan. During the selection of leading scientists for NCAI, Dr Yasar was selected (on competitive grounds) from among all Pakistani AI scientists for the top position of Chairman / Central Project Director of NCAI.
Abstract
In today's internet, popular content and application services (such as Facebook, Gmail, major news and wiki sites, and Google/Bing search) are provided to consumers by online providers. These seemingly free services and content are, in turn, supported by online ads. To make such ads relevant to the consumer and thus of any value to the advertiser, all popular high-traffic websites collect a host of information about consumers, often without their consent and without their knowledge. Thus, the price a consumer pays for enjoying these free online services is a compromise of their privacy: the sites they visit, the videos they play, the people they interact with, and a whole host of other private information that one may not wish to disclose. This talk will highlight the types of privacy breaches, with alarming local and global examples from the recent past. We also provide an overview of the internet ad system and web tracking, and their role in collecting and using private consumer data.
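As a small illustration of one tracking mechanism (a sketch under assumed, hypothetical page content, not the speaker's methodology): much web tracking is carried out by third-party scripts embedded in pages, so simply listing script hosts that differ from the page's own domain already surfaces candidate trackers.

```python
# Flag third-party script hosts on a page as potential trackers.
from html.parser import HTMLParser
from urllib.parse import urlparse

FIRST_PARTY = "news.example"        # hypothetical first-party domain

class ScriptCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            host = urlparse(dict(attrs).get("src", "")).netloc
            if host:
                self.hosts.add(host)

page = """
<html><body>
<script src="https://news.example/app.js"></script>
<script src="https://ads.tracker-net.example/pixel.js"></script>
</body></html>
"""

collector = ScriptCollector()
collector.feed(page)
third_party = {h for h in collector.hosts if not h.endswith(FIRST_PARTY)}
print("potential third-party trackers:", third_party)
```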
Bio
Zartash is an Associate Professor of Electrical Engineering and Computer Science in the Syed Babar Ali School of Science and Engineering at LUMS, Lahore, Pakistan. He received his B.Sc. in Electrical Engineering from UET, Taxila and his M.S. and Ph.D. from Stanford University. Previously, he held positions at Nokia Research Center, Bell Laboratories, and the Max Planck Institute for Software Systems. His recent research interests include efficient scheduling of flows in data center networks, the next-generation cellular packet core, measuring internet censorship, and preserving user data and privacy on the internet.