1. Nephele: Efficient Parallel Data Processing in the Cloud
In recent years, cloud computing has emerged as a promising new approach for ad-hoc parallel data processing. Major cloud computing companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and deploy their programs. However, the processing frameworks currently used stem from the field of cluster computing and disregard the particular nature of a cloud. As a result, the allocated compute resources may be inadequate for large parts of the submitted job and unnecessarily increase processing time and cost. In this proposed system we discuss the opportunities and challenges of efficient parallel data processing in clouds and present the project Nephele. Nephele is the first data processing framework to explicitly exploit the dynamic resource allocation offered by today's compute clouds for both task scheduling and execution. It allows assigning the particular tasks of a processing job to different types of virtual machines and takes care of their instantiation and termination during job execution. Based on this new framework, we perform evaluations on a compute cloud system and compare the results to the existing data processing framework Hadoop.
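The idea of assigning each task of a job to a suitable virtual machine type can be sketched roughly as follows. This is a minimal illustration only: the class, task, and instance-type names are hypothetical, and the real Nephele scheduler additionally handles instantiating and terminating the machines during execution.

```java
import java.util.*;

// Hypothetical sketch of VM-type-aware scheduling: each task of a job
// declares the instance type it should run on, and the scheduler groups
// tasks by type so instances can be started and shut down per stage.
public class VmScheduler {
    public static Map<String, List<String>> groupByVmType(Map<String, String> taskToVmType) {
        Map<String, List<String>> plan = new TreeMap<>();
        for (Map.Entry<String, String> e : taskToVmType.entrySet()) {
            plan.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        return plan;
    }

    public static void main(String[] args) {
        Map<String, String> job = new LinkedHashMap<>();
        job.put("read-input", "m1.small");   // I/O-bound task: cheap instance
        job.put("transform", "c1.xlarge");   // CPU-bound task: compute instance
        job.put("write-output", "m1.small");
        System.out.println(groupByVmType(job));
    }
}
```

Grouping this way lets the compute-heavy instance be terminated as soon as its stage finishes, which is the cost saving the abstract alludes to.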
2. Agent-Based Share Trading System
Here in the proposed system we have tried to create a system that will help people trade in the stock market. Agents will act as intermediaries on behalf of their users, taking decisions based on certain rules, which helps carry out trades with much lower risk and higher efficiency. For example, we can create agents that look for the shares with the lowest price when buying and the highest when bidding. A trade is only possible if the decision is acceptable to both parties taking part in the deal.
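The matching rule described above can be sketched as follows; the class name and the concrete rule are illustrative assumptions, not the system's actual implementation. A buying agent takes the lowest ask, a selling agent takes the highest bid, and a deal goes through only when both sides find the price acceptable.

```java
import java.util.*;

// Hypothetical sketch of the two-sided matching rule: the trade executes
// only when the best bid meets or exceeds the best ask, i.e. when the
// decision is acceptable to both parties.
public class TradeMatcher {
    // Returns the trade price, or -1 if no mutually acceptable deal exists.
    public static double match(List<Double> asks, List<Double> bids) {
        if (asks.isEmpty() || bids.isEmpty()) return -1;
        double lowestAsk = Collections.min(asks);   // best price for the buyer
        double highestBid = Collections.max(bids);  // best price for the seller
        return highestBid >= lowestAsk ? lowestAsk : -1;
    }

    public static void main(String[] args) {
        System.out.println(match(Arrays.asList(101.0, 99.5), Arrays.asList(100.0, 98.0))); // 99.5
        System.out.println(match(Arrays.asList(101.0), Arrays.asList(98.0)));              // -1.0
    }
}
```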
3. Face Recognition Using Neural Networks
The proposed system is used to identify a person by matching a face print. With the help of a neural network, we first train the system on a captured face-print image, and later use it to identify the person. Here we use a two-layer neural network for the purpose of identifying the face.
In the first layer of the neural network we detect the face of a person; that is, we train the network to decide whether what it is viewing is actually a face and not some other part of the body. In the second layer we train the network to identify the features of the face and thereby distinguish a particular person.
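The two-stage control flow above can be sketched as follows. This is purely illustrative: the trained network layers are stood in for by a fixed score threshold and a nearest-template search, and all names and values are assumptions.

```java
import java.util.*;

// Hypothetical two-stage sketch: stage one decides whether the input is a
// face at all; stage two matches the face's feature vector against stored
// templates to name the person.
public class FacePipeline {
    static final double FACE_THRESHOLD = 0.5; // assumed detector cutoff

    // Stage one: face / not a face.
    public static boolean isFace(double detectorScore) {
        return detectorScore >= FACE_THRESHOLD;
    }

    // Stage two: return the name of the closest enrolled template
    // (squared Euclidean distance over the feature vector).
    public static String identify(double[] features, Map<String, double[]> templates) {
        String best = "unknown";
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : templates.entrySet()) {
            double d = 0;
            for (int i = 0; i < features.length; i++) {
                double diff = features[i] - e.getValue()[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, double[]> db = new HashMap<>();
        db.put("alice", new double[]{0.9, 0.1});
        db.put("bob", new double[]{0.2, 0.8});
        double[] probe = {0.85, 0.15};
        if (isFace(0.7)) System.out.println(identify(probe, db)); // alice
    }
}
```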
4. Distributed Computing Using Hadoop
In the proposed system we make use of the computational capabilities of Hadoop for distributed computing. It enables applications to work with thousands of nodes and petabytes of data. Hadoop is a top-level Apache project, built and used by a global community of contributors and written in the Java programming language. It achieves reliability by replicating the data across multiple hosts. In the proposed system we implement Hadoop in a distributed network, making use of the resources of different machines and thereby achieving efficiency in the work that is done. The main advantage of Hadoop's distributed networking is that work can be done far more effectively than on a single computer in the network, greatly raising productivity.
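The computational model Hadoop distributes across nodes is map/reduce. The following is a framework-free, single-JVM sketch of that pattern using word counting as the example; real Hadoop additionally replicates the data and runs the phases on many machines in parallel.

```java
import java.util.*;

// Framework-free sketch of map/reduce: map each record to (key, value)
// pairs, group by key, then reduce each group to a single value.
public class MiniMapReduce {
    public static Map<String, Integer> wordCount(List<String> lines) {
        // Map phase: emit (word, 1) for every word, grouped by key.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty())
                    grouped.computeIfAbsent(word, k -> new ArrayList<>()).add(1);
            }
        }
        // Reduce phase: sum the values for each key.
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            counts.put(e.getKey(), sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(Arrays.asList("to be or not to be")));
        // {be=2, not=1, or=1, to=2}
    }
}
```

Because the map output is independent per record and the reduce is independent per key, both phases parallelize naturally, which is what lets Hadoop scale the same logic to thousands of nodes.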
5. A Search Engine for Finding Highly Relevant Applications
A fundamental problem of finding applications that are highly relevant to development tasks is the mismatch between the high-level intent reflected in the descriptions of these tasks and the low-level implementation details of applications. To reduce this mismatch we have come up with an approach called Exemplar (EXEcutable exaMPLes ARchive) for finding highly relevant software projects from large archives of applications. After a programmer enters a natural language query that contains high-level concepts, Exemplar uses information retrieval and program analysis techniques to retrieve applications that implement these concepts.
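The retrieval step can be illustrated with a deliberately simple term-overlap scoring; the class name and scoring rule are assumptions for illustration, whereas Exemplar itself combines information retrieval with program analysis of the applications' code.

```java
import java.util.*;

// Hypothetical sketch of retrieval by term overlap: score each application
// by how many query terms appear in its description, return the best match.
public class AppSearch {
    public static String bestMatch(String query, Map<String, String> apps) {
        Set<String> terms = new HashSet<>(Arrays.asList(query.toLowerCase().split("\\s+")));
        String best = null;
        int bestScore = -1;
        for (Map.Entry<String, String> e : apps.entrySet()) {
            int score = 0;
            for (String w : e.getValue().toLowerCase().split("\\s+"))
                if (terms.contains(w)) score++;   // count matching query terms
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, String> apps = new LinkedHashMap<>();
        apps.put("mp3ify", "encode audio files to mp3 format");
        apps.put("imgview", "display image files in a window");
        System.out.println(bestMatch("encode mp3 audio", apps)); // mp3ify
    }
}
```

The mismatch the abstract describes is exactly what this naive version suffers from: a program may implement "encode audio" without ever saying so in its description, which is why Exemplar also analyzes the code itself.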
6. Seattle: A Platform for Educational Cloud Computing
Cloud computing is rapidly increasing in popularity. Companies such as Red Hat, Microsoft, Amazon, Google, and IBM are increasingly funding cloud computing infrastructure and research, making it important for students to gain the necessary skills to work with cloud-based resources. This proposed system presents Seattle, a free educational research platform that is community-driven, serves as a common denominator for diverse platform types, and is broadly deployed. Seattle is community-driven: universities donate available compute resources on multi-user machines to the platform. These donations can come from systems with a wide variety of operating systems and architectures, removing the need for a dedicated infrastructure. Seattle is also surprisingly flexible and supports a variety of pedagogical uses, because as a platform it represents a common denominator for cloud computing, grid computing, peer-to-peer networking, distributed systems, and networking. Seattle programs are portable: students' code runs across different operating systems and architectures without change, while the Seattle programming language is expressive enough for experimentation at a fine-grained level.
7. Collage Steganography
Establishing hidden communication is an important subject of discussion that has gained increasing importance with the development of the Internet. In this project one such method is demonstrated with high efficiency. One of the methods introduced for establishing hidden communication is steganography. Methods of steganography have mostly been applied to images, and their major characteristic is that the changes made to the structure and features of the image are not identifiable by human observers or attackers. Since in this method the information is hidden in the appearance of the picture, present methods for identifying stego images can neither detect images produced by this method nor extract data from them. The proposed method has been implemented in the Java programming language.
While implementing this method, the main purpose is to hide data in a cover medium so that other persons will not notice that such data is there. This is a major distinction between this method and other methods of hidden exchange. The capacity of steganography in the spatial-domain method depends on the apparent features of the image and the variety of colors used in it, while in the temporal method the capacity relates to the number of pixels and the number of bits allocated to each pixel for the display of color.
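The collage method itself hides data in the picture's appearance, but the per-pixel capacity notion above can be illustrated with the classic spatial-domain least-significant-bit (LSB) scheme, sketched here under the assumption of one hidden bit per pixel value.

```java
// Minimal LSB sketch (not the collage method itself): hide one message bit
// in the least significant bit of each pixel value, so capacity is one bit
// per pixel. Changing the lowest bit alters the color imperceptibly.
public class LsbStego {
    public static int[] embed(int[] pixels, boolean[] bits) {
        int[] out = pixels.clone();
        for (int i = 0; i < bits.length && i < out.length; i++)
            out[i] = (out[i] & ~1) | (bits[i] ? 1 : 0); // overwrite lowest bit
        return out;
    }

    public static boolean[] extract(int[] pixels, int n) {
        boolean[] bits = new boolean[n];
        for (int i = 0; i < n; i++) bits[i] = (pixels[i] & 1) == 1;
        return bits;
    }

    public static void main(String[] args) {
        int[] cover = {200, 201, 202, 203};
        boolean[] secret = {true, false, true, true};
        boolean[] recovered = extract(embed(cover, secret), 4);
        System.out.println(java.util.Arrays.toString(recovered));
    }
}
```

Because LSB changes the image's bit structure, it is exactly the kind of scheme that statistical stego-detection tools target, which is the weakness the appearance-based collage method is meant to avoid.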
8. Data Protection Techniques in Modern Computer Networks
Here we present computer network security based on Public Key Infrastructure (PKI) systems. We consider possible vulnerabilities of TCP/IP computer networks and techniques to eliminate them. We argue that only a general, multi-layered security infrastructure can cope with possible attacks on computer network systems. We evaluate security mechanisms on the application, transport, and network layers of the ISO/OSI reference model and give examples of the most popular security protocols applied today in each of these layers. We recommend secure computer network systems that combine security mechanisms on three different ISO/OSI reference model layers: application-layer security based on strong user authentication, digital signatures, confidentiality protection, digital certificates, and hardware tokens; transport-layer security based on the establishment of a cryptographic tunnel between network nodes and a strong node authentication procedure; and network (IP) layer security providing bulk security mechanisms between network nodes. Strong user authentication procedures based on digital certificates and PKI systems are especially emphasized.
We also evaluate the differences between software-only, hardware-only, and combined software and hardware security systems; in this context, ubiquitous smart cards and hardware security modules are considered. Hardware security modules (HSMs) represent a very important security aspect of modern computer networks. The main purposes of an HSM are twofold: increasing overall system security and accelerating cryptographic functions. The proposed system aims to eliminate these possibilities of attack. Without security measures and controls in place, your data may be subjected to an attack. Some attacks are passive, in that information is only monitored; other attacks are active, and information is altered with intent to corrupt or destroy the data or the network itself. Your networks and data are vulnerable to any of these types of attack if you do not have a security plan in place.
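One of the application-layer mechanisms named above, the digital signature, can be demonstrated with the standard Java security API: generate a key pair, sign a message with the private key, and verify it with the public key. This is a minimal sketch of the primitive, not of the full PKI system (which also involves certificates binding keys to identities).

```java
import java.security.*;

// Digital signature round trip using the standard java.security API:
// RSA key pair, SHA256withRSA signature, then verification.
public class SignDemo {
    public static boolean signAndVerify(byte[] message) {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair pair = gen.generateKeyPair();

            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(pair.getPrivate());    // sender signs with the private key
            signer.update(message);
            byte[] sig = signer.sign();

            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(pair.getPublic()); // receiver verifies with the public key
            verifier.update(message);
            return verifier.verify(sig);
        } catch (GeneralSecurityException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(signAndVerify("hello".getBytes())); // true
    }
}
```

In an HSM-backed deployment the private-key operations above would run inside the hardware module, so the key never leaves it; that is the "increasing overall system security" purpose the text mentions.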
9. Evaluation of Detection Algorithms for MAC Layer Misbehavior
Here we address the problem of detecting greedy behavior in the IEEE 802.11 MAC protocol by evaluating the performance of two previously proposed schemes: DOMINO and the Sequential Probability Ratio Test (SPRT).
Our evaluation is carried out in four steps. We first derive a new analytical formulation of the SPRT that considers access to the wireless medium in discrete time slots. Then, we introduce an analytical model for DOMINO. As a third step, we evaluate the theoretical performance of SPRT and DOMINO with newly introduced metrics that take into account the repeated nature of the tests. This theoretical comparison provides two major insights into the problem: it confirms the optimality of SPRT and motivates us to define yet another test, a non-parametric CUSUM statistic that shares the same intuition as DOMINO but gives better performance. We conclude with experimental results, confirming the correctness of our theoretical analysis and validating the introduction of the new non-parametric CUSUM statistic.
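The shape of a one-sided non-parametric CUSUM test can be sketched as follows; the expected level, slack, and threshold values are illustrative assumptions, not the paper's parameters. The detector accumulates how far each observation drifts above the expected level, clamps the sum at zero, and raises an alarm once the accumulated drift crosses a threshold.

```java
// Sketch of a one-sided non-parametric CUSUM detector: small deviations
// decay back to zero, while sustained greedy drift accumulates until the
// threshold is crossed.
public class CusumDetector {
    // Returns the index at which the alarm fires, or -1 if it never does.
    public static int detect(double[] obs, double expected, double slack, double threshold) {
        double s = 0;
        for (int i = 0; i < obs.length; i++) {
            s = Math.max(0, s + (obs[i] - expected - slack)); // CUSUM update
            if (s > threshold) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        // A well-behaved node stays near 1 unit; a greedy node drifts upward.
        double[] greedy = {1.0, 1.1, 1.6, 1.7, 1.8, 1.9};
        System.out.println(detect(greedy, 1.0, 0.1, 1.0)); // 3
    }
}
```

Because the statistic is cumulative rather than a per-window count, sustained small cheating is eventually caught, which is the advantage over a DOMINO-style repeated threshold test that the comparison above points to.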
10. Fragmental Proxy Caching For Streaming Multimedia Objects
Here, a fragmental proxy-caching scheme that efficiently manages streaming multimedia data in the proxy cache is proposed to improve the quality of streaming multimedia services.
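One way to picture fragmental caching, under assumptions of our own (fixed-size fragments, prefix caching), is that the proxy stores only as many leading fragments of a media object as its budget allows, so playback can start from the cache while later fragments stream from the origin server.

```java
// Hypothetical sketch of fragment accounting in a fragmental proxy cache:
// an object is split into fixed-size fragments and the cache holds a
// prefix of them, bounded by the remaining cache budget.
public class FragmentCache {
    // Returns how many leading fragments of the object fit in the budget.
    public static int cachedFragments(int objectSize, int fragmentSize, int cacheBudget) {
        int fragments = (objectSize + fragmentSize - 1) / fragmentSize; // ceil division
        return Math.min(fragments, cacheBudget / fragmentSize);
    }

    public static void main(String[] args) {
        // 10 MB object, 1 MB fragments, 4 MB of cache budget: 4 fragments cached.
        System.out.println(cachedFragments(10, 1, 4)); // 4
    }
}
```

Caching at fragment rather than whole-object granularity is what lets the proxy hold useful prefixes of many streams at once instead of a few complete objects.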