Accepted Papers

  • Real Time Hand Gesture Recognition System By Using Line Of Features
    Mayyadah Ramiz Mahmood1 and Adnan Mohsin Abdulazeez2, 1Department of Computer Science, University of Zakho, Kurdistan, Iraq, 2Duhok Polytechnic University, Kurdistan, Iraq

    Hand gesture recognition is a challenging problem in computer vision, and sign language is an important and interesting application of hand gesture recognition systems. Recognizing the human hand is a complicated task whose solution requires a robust hand tracking technique built on an effective feature descriptor and classifier. This paper presents a fast and simple hand gesture recognition method based on fifty features extracted from a single image row. These feature descriptors represent the hand shape and are used to recognize the numbers one to ten in Kurdish Sign Language (KurdSL). The features are extracted in real time from the pre-processed hand object and represented as an optimized binary row. An Artificial Neural Network (ANN) classifier is then employed to recognize the performed gestures, achieving 98% accuracy, and to translate them, with voice output, into four languages. Finally, the work is compared with existing approaches.
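
    As a rough illustration of the idea (not the authors' implementation), the sketch below extracts a fixed-length binary feature vector from one row of a thresholded hand image and classifies it with a small feed-forward network; the row index, threshold, and network sizes are illustrative assumptions.

      # Minimal sketch: one binary image row as a 50-value feature vector,
      # classified by a tiny feed-forward network (NumPy only).
      import numpy as np

      def row_features(gray_image, row_index=None, n_features=50, threshold=128):
          """Binarize one image row and resample it to a fixed-length feature vector."""
          if row_index is None:
              row_index = gray_image.shape[0] // 2              # assume the middle row
          row = (gray_image[row_index] > threshold).astype(float)  # binary row
          idx = np.linspace(0, row.size - 1, n_features).astype(int)
          return row[idx]                                       # 50 binary features

      def ann_forward(x, w1, b1, w2, b2):
          """One-hidden-layer ANN forward pass; returns scores for 10 gestures."""
          h = np.tanh(x @ w1 + b1)
          return h @ w2 + b2

      # Toy usage with random weights (a trained network would be loaded instead).
      rng = np.random.default_rng(0)
      image = rng.integers(0, 256, size=(120, 160))             # stand-in hand image
      x = row_features(image)
      w1, b1 = rng.normal(size=(50, 32)), np.zeros(32)
      w2, b2 = rng.normal(size=(32, 10)), np.zeros(10)
      print("predicted gesture:", int(np.argmax(ann_forward(x, w1, b1, w2, b2)) + 1))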

  • Genetic Algorithm Analysis Using The Graph Coloring Method For Solving The University Timetable Problem
    Maram Assi, Bahia Halawi, and Ramzi A. Haraty, Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon

    The Timetable Problem is one of the complex problems faced by universities worldwide. It is a highly constrained combinatorial problem that seeks a feasible schedule for the university's course offerings. Many algorithms and approaches have been adopted to solve it, and one of the most effective is the use of meta-heuristics. Genetic algorithms have been applied successfully to many optimization problems, including the university Timetable Problem. In this paper, we analyze the Genetic Algorithm approach for graph coloring corresponding to the timetable problem. The GA method is implemented in Java, and the results of experiments based on the specified constraints and requirements show the improvement of the initial solution.
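
    The sketch below illustrates the underlying graph-coloring formulation with a bare-bones genetic algorithm in Python (the paper's implementation is in Java): courses are vertices, scheduling conflicts are edges, colors are timeslots, and the fitness to minimize is the number of conflicting edges. Population size, selection, crossover, and mutation settings are illustrative choices, not the paper's configuration.

      # Minimal GA for graph coloring, the formulation applied to timetabling.
      import random

      def conflicts(coloring, edges):
          """Fitness to minimize: edges whose endpoints share a color (timeslot)."""
          return sum(coloring[u] == coloring[v] for u, v in edges)

      def ga_coloring(n_vertices, edges, n_colors, pop_size=60, generations=300,
                      mutation_rate=0.05):
          pop = [[random.randrange(n_colors) for _ in range(n_vertices)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=lambda c: conflicts(c, edges))
              if conflicts(pop[0], edges) == 0:
                  break                                    # feasible timetable found
              parents = pop[:pop_size // 2]                # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, n_vertices)    # one-point crossover
                  child = a[:cut] + b[cut:]
                  for i in range(n_vertices):              # random-reset mutation
                      if random.random() < mutation_rate:
                          child[i] = random.randrange(n_colors)
                  children.append(child)
              pop = parents + children
          return min(pop, key=lambda c: conflicts(c, edges))

      # Toy instance: 6 courses, conflict edges, 3 timeslots.
      edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
      best = ga_coloring(6, edges, 3)
      print(best, "conflicts:", conflicts(best, edges))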

  • A Survey Of The Knapsack Problem
    Maram Assi and Ramzi A. Haraty, Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon

    The Knapsack Problem (KP) is one of the most studied combinatorial problems. There are many variations of the problem along with many real-life applications. KP seeks to select some of the available items so as to maximize the total profit while keeping the total weight within a given limit L. Knapsack problems have been used to tackle real-life problems in a variety of fields, including cryptography and applied mathematics. In this paper, we consider the different instances of the Knapsack Problem along with its applications and various approaches to solving the problem.
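
    For concreteness, a minimal dynamic-programming solution to the classical 0/1 variant (maximize profit subject to the weight limit) is sketched below; the item values here are illustrative, and the paper itself surveys many further variants and solution approaches.

      # Classical 0/1 knapsack dynamic programme with O(capacity) space.
      def knapsack_01(profits, weights, capacity):
          """Return the best achievable total profit within the weight capacity."""
          best = [0] * (capacity + 1)
          for p, w in zip(profits, weights):
              # Iterate capacities downwards so each item is used at most once.
              for c in range(capacity, w - 1, -1):
                  best[c] = max(best[c], best[c - w] + p)
          return best[capacity]

      # Toy instance: three items, capacity 50; the optimum is 220.
      print(knapsack_01(profits=[60, 100, 120], weights=[10, 20, 30], capacity=50))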

  • Protein Subcellular And Secreted Localization Prediction Using Deep Learning
    Hamza Zidoum and Mennatollah Magdy, Sultan Qaboos University, Muscat, Oman

    Predicting a protein's structure and discovering its function according to its location in the cell is crucial for understanding the cellular translocation process and has direct applications in drug discovery. Computational prediction of protein localization is an alternative to the time-consuming experimental approach. We use a deep learning approach to enhance prediction accuracy while reducing the time needed to predict the localization site of an uncharacterized protein sequence. Our approach is based on general biological features of the protein sequence and compartment-specific features, to which we added physico-chemical sequence features. We collected the protein sequences from UniProt/SWISS-PROT and then extracted the features for each protein. We consider only gram-positive and gram-negative bacteria, and five locations in the dataset, namely cytoplasm (CP), inner membrane (IM), outer membrane (OM), periplasm (PE), and secreted (SEC). We restrict the protein sequences to a minimum length of 100 amino acids and a maximum length of 1000 amino acids; each location contains 500 protein sequences. We propose a deep learning prediction method for bacteria taxonomy that combines one-versus-one and one-versus-all models with feature selection using linear SVM ranking and deep auto-encoders to initialize the weights. The method achieves an overall accuracy of 97.81% using 10-fold cross-validation on our data, outperforming the current state-of-the-art computational methods for protein subcellular localization on the selected dataset.
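
    A minimal sketch of two pieces of the described pipeline is given below, using synthetic data in place of the real sequence-derived features: ranking features with a linear SVM and training a one-versus-rest ensemble over the five location classes. The auto-encoder weight initialization and the one-versus-one component used in the paper are not reproduced here, and the feature counts and network sizes are illustrative assumptions.

      # Sketch: linear-SVM feature ranking + one-vs-rest classification (scikit-learn).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.multiclass import OneVsRestClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import LinearSVC

      # Synthetic stand-in: 2500 "proteins" (500 per class), 120 numeric features.
      X, y = make_classification(n_samples=2500, n_features=120, n_informative=30,
                                 n_classes=5, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

      # Rank features by the magnitude of the linear-SVM weights and keep the top 40.
      svm = LinearSVC(dual=False).fit(X_tr, y_tr)
      ranking = np.argsort(np.abs(svm.coef_).sum(axis=0))[::-1]
      top = ranking[:40]

      # One-vs-rest ensemble of small neural networks on the selected features.
      clf = OneVsRestClassifier(MLPClassifier(hidden_layer_sizes=(64, 32),
                                              max_iter=500, random_state=0))
      clf.fit(X_tr[:, top], y_tr)
      print("held-out accuracy:", clf.score(X_te[:, top], y_te))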