
Monday 12 November 2012

Java IEEE 2012 Projects List





  1. Cloud Data Protection for the Masses (Cloud Computing)
  2. Cooperative Provable Data Possession for Integrity Verification in Multi-Cloud Storage (Cloud & Parallel and Distributed)
  3. Ensuring Distributed Accountability for Data Sharing in the Cloud (Secure Computing)
  4. Game-Theoretic Pricing for Video Streaming in Mobile Networks (Image Processing)
  5. Learn to Personalized Image Search from the Photo Sharing Websites (Multimedia & Image Processing)
  6. On Optimizing Overlay Topologies for Search in Unstructured Peer-to-Peer Networks (Parallel and Distributed)
  7. Online Modeling of Proactive Moderation System for Auction Fraud Detection (Network Security)
  8. Packet-Hiding Methods for Preventing Selective Jamming Attacks (Secure Computing)
  9. Self Adaptive Contention Aware Routing Protocol for Intermittently Connected Mobile Networks (Parallel and Distributed)
  10. Trust Modeling in Social Tagging of Multimedia Content (Image Processing)
  11. Efficient Fuzzy Type-Ahead Search in XML Data (Knowledge & Data Engineering)
  12. Fast Data Collection in Tree-Based Wireless Sensor Networks (Mobile Computing)
  13. Footprint: Detecting Sybil Attacks in Urban Vehicular Networks (Parallel and Distributed)
  14. Handwritten Chinese Text Recognition by Integrating Multiple Contexts (Pattern Analysis & Machine Intelligence)
  15. Multiparty Access Control for Online Social Networks: Model and Mechanisms (Knowledge & Data Engineering)
  16. Organizing User Search Histories (Knowledge & Data Engineering)
  17. Ranking Model Adaptation for Domain-Specific Search (Knowledge & Data Engineering)
  18. Risk Aware Mitigation for MANET Routing Attacks (Secure Computing)
  19. Slicing: A New Approach to Privacy Preserving Data Publishing (Knowledge & Data Engineering)
  20. Secured Mobile Messaging




Secured Mobile Messaging


Abstract:

SMS messages are one of the most popular ways of communication. Sending an SMS is cheap, fast, and simple. However, when confidential information is exchanged using SMS, it is very difficult to protect it from SMS security threats such as man-in-the-middle and denial-of-service (DoS) attacks, and to ensure that the message was sent by an authorized sender. This paper describes a solution that provides SMS security guaranteeing confidentiality, authentication, and integrity. It uses a hybrid compression-encryption technique to secure the SMS data: the proposed technique encrypts the SMS using elliptic curve encryption and then compresses the encrypted SMS with a lossless compression technique to reduce its length.


One SMS message can contain at most 140 bytes (1120 bits) of data, so one SMS message can contain up to:
• 160 characters if 7-bit character encoding is used. (7-bit character encoding is suitable for Latin characters such as the English alphabet.)
• 70 characters if 16-bit UCS-2 (2-byte Universal Character Set) character encoding is used. (SMS text messages containing non-Latin characters, such as Chinese characters, should use 16-bit character encoding.)
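As a quick illustration, both limits above follow directly from the 140-byte payload:

```java
// Illustration of the SMS payload limits described above:
// one message carries 140 bytes = 1120 bits of data.
public class SmsCapacity {
    static final int PAYLOAD_BITS = 140 * 8; // 1120 bits

    // Maximum number of characters for a given per-character bit width.
    static int maxChars(int bitsPerChar) {
        return PAYLOAD_BITS / bitsPerChar;
    }

    public static void main(String[] args) {
        System.out.println("GSM 7-bit:    " + maxChars(7));  // 160 characters
        System.out.println("UCS-2 16-bit: " + maxChars(16)); // 70 characters
    }
}
```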

SMS text messaging supports languages internationally. It works fine with all 


languages supported by Unicode, including Arabic, Chinese, Japanese and 

Korean.


SMS Security: What is Needed?

Authentication: Confirm the true identities of sender and receiver, and prevent impersonation attacks by intruders.
Confidentiality: Ensure that messages are accessible only to authorized senders and receivers.
Integrity: Ensure that receivers can verify whether a message has been modified, so that tampered messages are detected and rejected.
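This is not the paper's scheme, but as a concrete illustration of the authentication and integrity requirements, a shared-key message authentication code (here HMAC-SHA256 from the JDK; key and message are made up) lets the receiver detect both tampering and impersonation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical illustration (not the proposed technique): an HMAC tag
// computed under a key shared by sender and receiver provides both
// integrity and sender authentication for an SMS body.
public class SmsMac {
    static byte[] tag(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    static boolean verify(byte[] key, String message, byte[] expected) throws Exception {
        // Note: production code should use a constant-time comparison.
        return Arrays.equals(tag(key, message), expected);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
        byte[] t = tag(key, "meet at 5pm");
        System.out.println(verify(key, "meet at 5pm", t)); // true
        System.out.println(verify(key, "meet at 6pm", t)); // false: tampering detected
    }
}
```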



Performance comparison:

The following measurements are used to evaluate the performance of lossless algorithms.
Compression Ratio is the ratio between the size of the compressed file and the size of the source file.
Compression Ratio = size after compression / size before compression
Compression Factor is the inverse of the compression ratio, that is, the ratio between the size of the source file and the size of the compressed file.
Compression Factor = size before compression / size after compression
Saving Percentage expresses the shrinkage of the source file as a percentage.
Saving Percentage = (size before compression − size after compression) / size before compression × 100%
All the above measures evaluate the effectiveness of compression algorithms using file sizes.
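The three measures can be computed directly from the two file sizes (the example sizes below are made up for illustration):

```java
// The three measurements defined above, computed from file sizes.
public class CompressionMetrics {
    static double ratio(long before, long after)  { return (double) after / before; }
    static double factor(long before, long after) { return (double) before / after; }
    static double savingPercent(long before, long after) {
        return 100.0 * (before - after) / before;
    }

    public static void main(String[] args) {
        long before = 1000, after = 400; // e.g. 1000 bytes compressed to 400 bytes
        System.out.println(ratio(before, after));         // 0.4
        System.out.println(factor(before, after));        // 2.5
        System.out.println(savingPercent(before, after)); // 60.0
    }
}
```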




CONCLUSION:

In this report we studied techniques for securing SMS. The proposed technique combines encryption and compression: it first encrypts the SMS using the Elliptic Curve algorithm, and then compresses the encrypted SMS using a lossless algorithm, namely the Shannon-Fano algorithm. The advantage of this technique is that it achieves protection criteria such as confidentiality and authenticity between the two communicating parties while at the same time decreasing the message length.



Tuesday 30 October 2012

Slicing: A New Approach to Privacy Preserving Data Publishing


Abstract:

Several anonymization techniques, such as generalization and bucketization, 

have been designed for privacy preserving microdata publishing. Recent 

work has shown that generalization loses considerable amount of information, 

especially for high-dimensional data. Bucketization, on the other hand, does 

not prevent membership disclosure and does not apply for data that do not 

have a clear separation between quasi-identifying attributes and sensitive 

attributes. In this paper, we present a novel technique called slicing, which 

partitions the data both horizontally and vertically. We show that slicing 

preserves better data utility than generalization and can be used for 

membership disclosure protection. Another important advantage of slicing is 

that it can handle high-dimensional data. We show how slicing can be used 

for attribute disclosure protection and develop an efficient algorithm for 

computing the sliced data that obey the ℓ-diversity requirement. Our workload 

experiments confirm that slicing preserves better utility than generalization 

and is more effective than bucketization in workloads involving the sensitive 

attribute. Our experiments also demonstrate that slicing can be used to 

prevent membership disclosure.

Algorithm Used:

Slicing Algorithms


 


Advantage of slicing is its ability to handle high-dimensional data. By 

partitioning attributes into columns, slicing reduces the dimensionality of the 

data. Each column of the table can be viewed as a sub-table with a lower 

dimensionality. Slicing is also different from the approach of publishing 

multiple independent sub-tables in that these sub-tables are linked by the 

buckets in slicing.
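The vertical step described above can be sketched as follows. This is a hedged, minimal illustration (attribute names and groupings are made up; bucketing, the horizontal step, and the ℓ-diversity check are omitted): attributes are partitioned into column groups, and each group is projected into a lower-dimensional sub-table.

```java
import java.util.*;

// Hypothetical sketch of slicing's vertical partitioning: attributes are
// grouped into columns, and each column is viewed as a sub-table of
// lower dimensionality.
public class VerticalSlicing {
    // table: one Map (attribute -> value) per record.
    static List<List<Map<String, String>>> slice(
            List<Map<String, String>> table, List<List<String>> columnGroups) {
        List<List<Map<String, String>>> subTables = new ArrayList<>();
        for (List<String> group : columnGroups) {
            List<Map<String, String>> sub = new ArrayList<>();
            for (Map<String, String> row : table) {
                Map<String, String> projected = new LinkedHashMap<>();
                for (String attr : group) projected.put(attr, row.get(attr));
                sub.add(projected);
            }
            subTables.add(sub);
        }
        return subTables;
    }

    public static void main(String[] args) {
        Map<String, String> r = new LinkedHashMap<>();
        r.put("Age", "29"); r.put("Sex", "F"); r.put("Zip", "47906"); r.put("Disease", "Flu");
        // Highly correlated attributes are kept together in one column.
        List<List<String>> groups = Arrays.asList(
                Arrays.asList("Age", "Sex"),
                Arrays.asList("Zip", "Disease"));
        System.out.println(slice(Arrays.asList(r), groups));
    }
}
```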



Risk Aware Mitigation for MANET Routing Attacks


Abstract:

Mobile Ad hoc Networks (MANETs) are highly vulnerable to attacks due to the dynamic nature of their network infrastructure. Among these attacks, routing attacks have received considerable attention, since they can cause the most devastating damage to a MANET. Even though there exist several intrusion 

response techniques to mitigate such critical attacks, existing solutions 

typically attempt to isolate malicious nodes based on binary or naive fuzzy 

response decisions.


 However, binary responses may result in the unexpected network partition, 

causing additional damages to the network infrastructure, and naive fuzzy 

responses could lead to uncertainty in countering routing attacks in MANET. In 

this paper, we propose a risk-aware response mechanism to systematically 

cope with the identified routing attacks. Our risk-aware approach is based on 

an extended Dempster-Shafer mathematical theory of evidence introducing a 

notion of importance factors. In addition, our experiments demonstrate the 

effectiveness of our approach with the consideration of several performance 

metrics.


Fig.  Risk-aware response mechanism
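The risk assessment above builds on Dempster-Shafer evidence theory. As a hedged, minimal sketch (the two-element frame {Attack, NoAttack} and all mass values are illustrative, and the paper's importance factors are omitted), Dempster's rule of combination fuses two independent pieces of evidence:

```java
// Sketch of Dempster's rule of combination, the core of the
// Dempster-Shafer theory the paper extends. Masses are assigned to
// {Attack}, {NoAttack}, and the whole frame Θ (uncertainty).
public class DempsterRule {
    // index 0 = {Attack}, 1 = {NoAttack}, 2 = Θ
    static double[] combine(double[] m1, double[] m2) {
        // Conflict K: one source says Attack while the other says NoAttack.
        double k = m1[0] * m2[1] + m1[1] * m2[0];
        double attack   = m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0];
        double noAttack = m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1];
        double theta    = m1[2] * m2[2];
        double norm = 1.0 - k; // renormalize away the conflicting mass
        return new double[] { attack / norm, noAttack / norm, theta / norm };
    }

    public static void main(String[] args) {
        double[] ids   = {0.7, 0.1, 0.2}; // hypothetical evidence from an IDS alert
        double[] route = {0.6, 0.2, 0.2}; // hypothetical routing-anomaly evidence
        double[] fused = combine(ids, route);
        System.out.printf("Attack=%.2f NoAttack=%.2f Theta=%.2f%n",
                fused[0], fused[1], fused[2]); // Attack=0.85 NoAttack=0.10 Theta=0.05
    }
}
```

Combining the two sources sharpens the belief in Attack (0.85) beyond what either source assigned alone, which is the behavior the response mechanism relies on.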







Ranking Model Adaptation For Domain-Specific Search


ABSTRACT:


With the explosive emergence of vertical search domains, applying the broad-

based ranking model directly to different domains is no longer desirable due to 

domain differences, while building a unique ranking model for each domain is 

both laborious for labeling data and time-consuming for training models. In this 

paper, we address these difficulties by proposing a regularization based 

algorithm called ranking adaptation SVM (RA-SVM), through which we can 

adapt an existing ranking model to a new domain, so that the amount of 

labeled data and the training cost is reduced while the performance is still 

guaranteed. Our algorithm only requires the predictions of the existing 

ranking models, rather than their internal representations or the data from 

auxiliary domains. In addition, we assume that documents similar in the 

domain-specific feature space should have consistent rankings, and add some 

constraints to control the margin and slack variables of RA-SVM adaptively. 

Finally, ranking adaptability measurement is proposed to quantitatively 

estimate if an existing ranking model can be adapted to a new domain. 

Experiments performed over Letor and two large scale datasets crawled from 

a commercial search engine demonstrate the applicability of the proposed 

ranking adaptation algorithms and the ranking adaptability 

measurement.

Organizing User Search Histories

Abstract:

Users are increasingly pursuing complex task-oriented goals on the Web, such as making travel arrangements, managing finances or planning purchases. To this end, they usually break down the tasks into a few co-dependent steps and issue multiple queries around these steps repeatedly over long periods of time. To better support users in their long-term information quests on the Web, search engines keep track of their queries and clicks while searching online. In this paper, we study the problem of organizing a user’s historical queries into groups in a dynamic and automated fashion. Automatically identifying query groups is helpful for a number of different search engine components and applications, such as query suggestions, result ranking, query alterations, sessionization, and collaborative search. In our approach, we go beyond approaches that rely on textual similarity or time thresholds, and we propose a more robust approach that leverages search query logs. We experimentally study the performance of different techniques, and showcase their potential, especially when combined together.

Algorithm Used:

PageRank Algorithm
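As a hedged sketch of the PageRank computation named above (the graph, damping factor, and iteration count are illustrative; the paper's construction of the query graph from search logs is not reproduced here):

```java
import java.util.Arrays;

// Minimal power-iteration PageRank over a small directed graph given as
// an adjacency matrix. Illustrative only; not the paper's query-log graph.
public class PageRank {
    static double[] rank(boolean[][] adj, double damping, int iters) {
        int n = adj.length;
        double[] r = new double[n];
        Arrays.fill(r, 1.0 / n); // start from the uniform distribution
        for (int it = 0; it < iters; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - damping) / n); // teleportation term
            for (int i = 0; i < n; i++) {
                int out = 0;
                for (boolean b : adj[i]) if (b) out++;
                if (out == 0) continue; // dangling node (mass dropped in this sketch)
                for (int j = 0; j < n; j++)
                    if (adj[i][j]) next[j] += damping * r[i] / out;
            }
            r = next;
        }
        return r;
    }

    public static void main(String[] args) {
        // Edges: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
        boolean[][] adj = { {false, true, true}, {false, false, true}, {true, false, false} };
        System.out.println(Arrays.toString(rank(adj, 0.85, 50)));
    }
}
```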

MULTIPARTY ACCESS CONTROL FOR ONLINE SOCIAL NETWORKS: MODEL AND MECHANISMS


ABSTRACT:


Online social networks (OSNs) have experienced tremendous growth in recent 

years and become a de facto portal for hundreds of millions of Internet users. 

These OSNs offer attractive means for digital social interactions and 

information sharing, but also raise a number of security and privacy issues. 

While OSNs allow users to restrict access to shared data, they currently do not 

provide any mechanism to enforce privacy concerns over data associated with 

multiple users. To this end, we propose an approach to enable the protection of 

shared data associated with multiple users in OSNs. We formulate an access 

control model to capture the essence of multiparty authorization requirements, 

along with a multiparty policy specification scheme and a policy enforcement 

mechanism. Besides, we present a logical representation of our access control 

model which allows us to leverage the features of existing logic solvers to 

perform various analysis tasks on our model. We also discuss a proof-of-

concept prototype of our approach as part of an application in Facebook and 

provide usability study and system evaluation of our method.

Handwritten Chinese Text Recognition by Integrating Multiple Contexts


Abstract:


            This paper presents an effective approach for the offline recognition of 

unconstrained handwritten Chinese texts. Under the general integrated 

segmentation-and-recognition framework with character oversegmentation, 

we investigate three important issues: candidate path evaluation, path search, 

and parameter estimation. For path evaluation, we combine multiple contexts 

(character recognition scores, geometric and linguistic contexts) from the 

Bayesian decision view, and convert the classifier outputs to posterior 

probabilities via confidence transformation. In path search, we use a refined 

beam search algorithm to improve the search efficiency and, meanwhile, use a 

candidate character augmentation strategy to improve the recognition 

accuracy. The combining weights of the path evaluation function are optimized 

by supervised learning using a Maximum Character Accuracy criterion. We 

evaluated the recognition performance on a Chinese handwriting database 

CASIA-HWDB, which contains nearly four million character samples of 7,356 

classes and 5,091 pages of unconstrained handwritten texts. The 

experimental results show that confidence transformation and combining 

multiple contexts improve the text line recognition performance significantly. 

On a test set of 1,015 handwritten pages, the proposed approach achieved 

character-level accurate rate of 90.75 percent and correct rate of 91.39 

percent, which are superior by far to the best results reported in the literature.


Fig. System diagram of handwritten Chinese text line recognition

Fig. A page of handwritten Chinese text

Footprint: Detecting Sybil Attacks in Urban Vehicular Networks

Abstract:    

In urban vehicular networks, where privacy, especially the location privacy of 
vehicles, is a major concern, anonymous verification of vehicles is 

indispensable. Consequently, an attacker who succeeds in forging multiple 

hostile identities can easily launch a Sybil attack, gaining a disproportionately 

large influence. In this paper, we propose a novel Sybil attack detection 

mechanism, Footprint, using the trajectories of vehicles for identification while 

still preserving their location privacy. More specifically, when a vehicle 

approaches a road-side unit (RSU), it actively demands an authorized message 

from the RSU as the proof of the appearance time at this RSU. We design a 

location-hidden authorized message generation   scheme for two objectives: 

first, RSU signatures on messages are signer ambiguous so that the RSU 

location information is concealed from the resulting authorized message; 

second, two  authorized messages signed by the same RSU within the same 

given period of time (temporarily linkable) are recognizable so that they can 

be used for identification. With the temporal limitation on the linkability of two 

authorized messages, authorized messages used for long-term identification 

are prohibited. With this scheme, vehicles can generate a location-hidden 

trajectory for location-privacy-preserved identification by collecting a 

consecutive series of authorized   messages. Utilizing social relationship among 

trajectories according to the similarity definition of two trajectories, Footprint 

can recognize and therefore dismiss “communities” of Sybil trajectories. 

Rigorous security analysis and extensive trace-driven simulations demonstrate 

the efficacy of Footprint.


The design of a Sybil attack detection scheme in urban vehicular networks should achieve three goals:


1. Location privacy preservation: a particular vehicle would not like to 

expose  its location information to other vehicles and RSUs as well since such 

information can be confidential. The detection scheme should prevent the 

location information of vehicles from being leaked.

2. Online detection: when a Sybil attack is launched, the detection scheme 

should react before the attack has terminated. Otherwise, the attacker could 

already achieve its purpose.

3. Independent detection: the essence of Sybil attack happening is that the 

decision is made based on group negotiations. To eliminate the possibility that 

a Sybil attack is launched against the detection itself, the detection should be 

conducted independently by the verifier without collaboration with others.


Fast Data Collection in Tree Based Wireless Sensor Networks


Abstract:

We investigate the following fundamental question: how fast can information 

be collected from a wireless sensor network organized as a tree? To address 

this, we explore and evaluate a number of different techniques using realistic 

simulation models under the many-to-one communication paradigm known as 

convergecast. We first consider time scheduling on a single frequency channel 

with the aim of minimizing the number of time slots required (schedule length) 

to complete a convergecast. Next, we combine scheduling with transmission 

power control to mitigate the effects of interference, and show that while 

power control helps in reducing the schedule length under a single frequency, 

scheduling transmissions using multiple frequencies is more efficient. We give 

lower bounds on the schedule length when interference is completely 

eliminated, and propose algorithms that achieve these bounds. We also 

evaluate the performance of various channel assignment methods and find 

empirically that for moderate size networks of about 100 nodes, the use of 

multi-frequency scheduling can suffice to eliminate most of the interference. 

Then, the data collection rate no longer remains limited by interference but by 

the topology of the routing tree. To this end, we construct degree-constrained 

spanning trees and capacitated minimal spanning trees, and show significant 

improvement in scheduling performance over different deployment densities. 

Lastly, we evaluate the impact of different interference and channel models on 

the schedule length.


Algorithm used:

1. BFS TIME SLOT ASSIGNMENT
2. LOCAL-TIME SLOT ASSIGNMENT
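The first of these can be sketched as follows. This is a hedged, simplified illustration rather than the paper's algorithm: tree edges are visited in BFS order, and each takes the smallest time slot not already used by an edge it shares an endpoint with, a crude stand-in for the paper's richer interference model.

```java
import java.util.*;

// Simplified BFS-order time-slot assignment on a routing tree.
// Two edges are assumed to conflict when they share an endpoint.
public class BfsSlotAssignment {
    // parent[i] = parent of node i; parent[root] = -1.
    // Returns slot[i] for the edge (i -> parent[i]); slot of the root is -1.
    static int[] assignSlots(int[] parent) {
        int n = parent.length;
        Integer[] nodes = new Integer[n];
        for (int i = 0; i < n; i++) nodes[i] = i;
        // Visiting nodes by increasing depth gives BFS order over edges.
        Arrays.sort(nodes, Comparator.comparingInt(v -> depth(v, parent)));
        int[] slot = new int[n];
        Arrays.fill(slot, -1);
        for (int v : nodes) {
            if (parent[v] < 0) continue; // root has no upstream edge
            Set<Integer> taken = new HashSet<>();
            for (int u = 0; u < n; u++)
                if (u != v && slot[u] >= 0 && conflicts(u, v, parent)) taken.add(slot[u]);
            int s = 0;
            while (taken.contains(s)) s++; // smallest free slot
            slot[v] = s;
        }
        return slot;
    }

    // Edges (u -> parent[u]) and (v -> parent[v]) share an endpoint.
    static boolean conflicts(int u, int v, int[] parent) {
        return u == parent[v] || v == parent[u] || parent[u] == parent[v];
    }

    static int depth(int v, int[] parent) {
        int d = 0;
        while (parent[v] >= 0) { v = parent[v]; d++; }
        return d;
    }

    public static void main(String[] args) {
        // Small tree: 0 is the sink; 1 and 2 are its children; 3 and 4 are children of 1.
        int[] parent = {-1, 0, 0, 1, 1};
        System.out.println(Arrays.toString(assignSlots(parent)));
    }
}
```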

Efficient Fuzzy Type Ahead Search in XML Data


Abstract:       


In a traditional keyword-search system over XML data, a user composes a 

keyword query, submits it to the system, and retrieves relevant answers. In 

the case where the user has limited knowledge about the data, often the user 

feels “left in the dark” when issuing queries, and has to use a try-and-see 

approach for finding information. In this paper, we study fuzzy type-ahead 

search in XML data, a new information-access paradigm in which the system 

searches XML data on the fly as the user types in query keywords. It allows 

users to explore data as they type, even in the presence of minor errors of 

their keywords. Our proposed method has the following features: 


1) Search as you type: It extends autocomplete by supporting queries with multiple keywords in XML data. 

2) Fuzzy: It can find high-quality answers that have keywords matching query keywords approximately. 

3) Efficient: Our effective index structures and searching algorithms can achieve a very high interactive speed. 

We study research challenges in this new search framework. We propose 

effective index structures and top-k algorithms to achieve a high interactive 

speed. We examine effective ranking functions and early termination 

techniques to progressively identify the top-k relevant answers. We have 

implemented our method on real data sets, and the experimental results show 

that our method achieves high search efficiency and result quality.
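The "fuzzy" half of this paradigm can be sketched as follows. This is a hedged illustration, not the paper's index-based method (the dictionary, threshold, and brute-force prefix scan are illustrative; the paper uses trie-based indexes for interactive speed): a keyword matches the partially typed query if some prefix of it is within a small edit distance.

```java
import java.util.*;

// Brute-force sketch of fuzzy prefix matching for type-ahead search.
public class FuzzyTypeAhead {
    // Classic dynamic-programming Levenshtein edit distance.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    // A word matches if some prefix of it is within tau edits of the typed keyword.
    static boolean fuzzyPrefixMatch(String typed, String word, int tau) {
        for (int len = 0; len <= word.length(); len++)
            if (editDistance(typed, word.substring(0, len)) <= tau) return true;
        return false;
    }

    public static void main(String[] args) {
        List<String> dictionary = Arrays.asList("database", "datamining", "xml", "query");
        for (String w : dictionary)
            if (fuzzyPrefixMatch("datab", w, 1)) // user has typed "datab", tau = 1
                System.out.println(w);           // prints: database, datamining
    }
}
```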

TRUST MODELING IN SOCIAL TAGGING OF MULTIMEDIA CONTENT


ABSTRACT:

                Tagging in online social networks is very popular these days, as it 

facilitates search and retrieval of multimedia content. However, noisy and 

spam annotations often make it difficult to perform an efficient search. Users 

may make mistakes in tagging and irrelevant tags and content may be 

maliciously added for advertisement or self-promotion. This article surveys 

recent advances in techniques for combatting such noise and spam in social 

tagging. We classify the state-of-the-art approaches into a few categories and 

study representative examples in each. We also qualitatively compare and 

contrast them and outline open issues for future research.



CONCLUSION:
In this article, we dealt with one of the key issues in social tagging systems: 

combatting noise and spam. We classified existing studies in the literature into 

two categories, i.e., content and user trust modeling. Representative 

techniques in each category were analyzed and compared. In addition, existing 

databases and evaluation protocols were reviewed. An example system was 

presented to demonstrate how trust modeling can be particularly employed in 

a popular application of image sharing and geotagging. Finally, open issues and 

future research trends were outlined. As online social networks and content 

sharing services evolve rapidly, we believe that the research on enhancing 

reliability and trustworthiness of such services will become increasingly 

important.

Self Adaptive Contention Aware Routing Protocol for Intermittently Connected Mobile Networks


Abstract:

            This paper introduces a novel multi-copy routing protocol, called Self 

Adaptive Utility-based Routing Protocol (SAURP), for Delay Tolerant Networks 

(DTNs) that are possibly composed of a vast number of devices in miniature 

such as smart phones of heterogeneous capacities in terms of energy resources 

and buffer spaces. SAURP is characterized by the ability of identifying potential 

opportunities for forwarding messages to their destinations via a novel utility 

function based mechanism, in which a suite of environment parameters, such 

as wireless channel condition, nodal buffer occupancy, and encounter statistics, 

are jointly considered. Thus, SAURP can reroute messages around nodes 

experiencing high buffer occupancy, wireless interference, and/or congestion, 

while taking a considerably small number of transmissions. The developed 

utility function in SAURP is proved to be able to achieve optimal performance, 

which is further analyzed via a stochastic modeling approach. Extensive 

simulations are conducted to verify the developed analytical model and 

compare the proposed SAURP with a number of recently reported encounter-

based routing approaches in terms of delivery ratio, delivery delay, and the 

number of transmissions required for each message delivery. The simulation 

results show that SAURP outperforms all the counterpart multi-copy encounter-

based routing protocols considered in the study.
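The utility-based forwarding decision described above can be sketched as follows. This is a hedged illustration, not the paper's utility function: the weights, the linear form, and the input names (encounter probability, buffer occupancy, channel quality) are assumptions chosen to mirror the environment parameters the abstract lists.

```java
// Hypothetical utility function in the spirit of SAURP: a node forwards
// a message copy only to a neighbor whose utility for the destination
// exceeds its own. Weights and fields are illustrative assumptions.
public class SaurpUtility {
    static double utility(double encounterProb, double bufferOccupancy, double channelQuality) {
        // Favor nodes that meet the destination often, have free buffer
        // space, and see a good wireless channel.
        return 0.5 * encounterProb + 0.3 * (1.0 - bufferOccupancy) + 0.2 * channelQuality;
    }

    static boolean shouldForward(double selfUtility, double neighborUtility) {
        return neighborUtility > selfUtility;
    }

    public static void main(String[] args) {
        double self = utility(0.2, 0.8, 0.9);     // busy node, rarely meets destination
        double neighbor = utility(0.7, 0.3, 0.6); // often meets destination, free buffers
        System.out.println(shouldForward(self, neighbor)); // true
    }
}
```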

Packet-Hiding Methods for Preventing Selective Jamming Attacks


Abstract:

The open nature of the wireless medium leaves it vulnerable to intentional 

interference attacks, typically referred to as jamming. This intentional 

interference with wireless transmissions can be used as a launchpad for 

mounting Denial-of-Service attacks on wireless networks. Typically, jamming 

has been addressed under an external threat model. However, adversaries 

with internal knowledge of protocol specifications and network secrets can 

launch low-effort jamming attacks that are difficult to detect and counter. In 

this work, we address the problem of selective jamming attacks in wireless 

networks. In these attacks, the adversary is active only for a short period of 

time, selectively targeting messages of high importance. We illustrate the 

advantages of selective jamming in terms of network performance degradation 

and adversary effort by presenting two case studies: a selective attack on TCP 

and one on routing. We show that selective jamming attacks can be launched 

by performing real-time packet classification at the physical layer. To mitigate 

these attacks, we develop three schemes that prevent real-time packet 

classification by combining cryptographic primitives with physical-layer 

attributes. We analyze the security of our methods and evaluate their 

computational and communication overhead.




Modules:-
1. Network module
2. Real Time Packet Classification
3. Selective Jamming Module
4. Strong Hiding Commitment Scheme (SHCS)
5. Cryptographic Puzzle Hiding Scheme (CPHS)
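The idea behind the strong hiding commitment scheme (module 4) can be sketched as follows. This is a hedged illustration, not the paper's construction: the packet is transmitted encrypted under a fresh one-time key, so a jammer cannot classify it in real time, and the key is disclosed only after the packet transmission completes (AES/ECB is used here purely for brevity).

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Sketch of commit-then-reveal packet hiding: commit = E_k(packet) is
// sent first; the key k follows once the packet is fully on the air.
public class PacketHiding {
    // Sender side: returns { ciphertext, key }. The ciphertext is sent
    // immediately; the key is revealed after transmission.
    static byte[][] commit(byte[] packet) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        byte[] key = kg.generateKey().getEncoded(); // fresh one-time key
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return new byte[][] { c.doFinal(packet), key };
    }

    // Receiver side: de-commit once the key arrives.
    static byte[] open(byte[] ciphertext, byte[] key) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"));
        return c.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        byte[] packet = "ROUTE-REQUEST: node 7 -> node 2".getBytes(StandardCharsets.UTF_8);
        byte[][] cm = commit(packet);
        // ... key cm[1] is revealed only after cm[0] has been transmitted ...
        System.out.println(Arrays.equals(open(cm[0], cm[1]), packet)); // true
    }
}
```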