International Journal of Research in Computer and Communication Technology
  This is an Open Access journal
   ISSN (Print): 2320-5156 - ISSN (Online): 2278-5841
   Published by Suryansh Publications
  • The Augmented De-duplication Method over Cloud Environment with Enhanced
           Self-assurance

    • Authors: A Sharath Chandra, Reddy Sowjanya, A Mary Sowjanya
      Abstract: The use of cloud storage is expanding, and data deduplication techniques are employed to cope with the resulting growth of data. Because cloud storage is provided by third-party cloud providers, data security is also required, yet deduplication techniques cannot be applied directly together with security mechanisms. This paper therefore discusses data deduplication combined with securing techniques, forming secure deduplication. Deduplication is the process of deleting redundant copies of stored data (single-instance storage) while still meeting the confidentiality requirements of data in the cloud. Because deduplication improves storage utilization, it is popular in both academia and industry, but it suffers from problems such as data reliability. Earlier systems used a single-server setting, which cannot safely preserve only a single copy of the data because of reliability and security concerns. Data privacy is a particularly difficult issue, and it arises when sensitive data is outsourced to a cloud system. Conventional encryption is also problematic for deduplication, since it produces different ciphertexts for different users even when they share identical data. There is thus a need to achieve data confidentiality and reliability in a distributed setting while preserving data security requirements. Distributed deduplication systems, in which data blocks are spread across multiple cloud servers, are efficient in this respect.
      PubDate: 2016-09-20
      Issue No: Vol. 5 (2016)
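The single-instance storage idea this abstract describes can be illustrated with content-addressed storage keyed by a cryptographic fingerprint. This is a simplified sketch, not the authors' system; `ChunkStore` and its methods are hypothetical names:

```python
import hashlib

class ChunkStore:
    """Single-instance store: identical chunks are kept only once."""
    def __init__(self):
        self.chunks = {}      # fingerprint -> chunk bytes
        self.refcount = {}    # fingerprint -> number of owners

    def put(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.chunks:          # store only the first copy
            self.chunks[fp] = data
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp

    def get(self, fp: str) -> bytes:
        return self.chunks[fp]

store = ChunkStore()
a = store.put(b"report-2016.pdf contents")
b = store.put(b"report-2016.pdf contents")   # duplicate upload
assert a == b
assert len(store.chunks) == 1 and store.refcount[a] == 2
```

A distributed variant would place each fingerprinted chunk on several servers, which is where the reliability concern raised in the abstract comes in.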
       
  • Simulation of Software Defined Radio for OFDM Transceivers in VHDL

    • Authors: Ch. Gangadhar, D Suresh
      Abstract: Modern radio communication systems demand flexibility, and one of the methods to achieve it is Software Defined Radio (SDR) technology, in which the hardware is made reconfigurable at runtime by adjusting system parameters in software. The great flexibility of SDR platforms makes it possible to implement and experiment with OFDM systems at lower cost and effort than implementing the whole system in hardware. Orthogonal Frequency Division Multiplexing (OFDM) is a transmission technique that ensures efficient use of the spectrum by allowing carriers to overlap; it combines modulation and multiplexing for the transmission of data. Compared with other wireless transmission techniques such as Frequency Division Multiple Access (FDMA) and Code Division Multiple Access (CDMA), OFDM has several advantages, including high spectral density, robustness to channel fading, and the ability to overcome several radio impairments such as the effects of AWGN, impulse noise, and multipath fading. It therefore finds wide application in Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), and wireless LANs. Most wireless LAN standards, such as IEEE 802.11a and IEEE 802.11g, use OFDM as the main multiplexing scheme for better use of the spectrum, and OFDMA is in fact the backbone of 4G telecommunication systems. This project deals with the simulation of a Software Defined Radio for an OFDM transceiver in VHDL using the Xilinx ISE tools. The SDR architecture for the OFDM transceiver is designed in VHDL and implemented on a Field Programmable Gate Array (FPGA) using the Xilinx ISE and ModelSim software tools.
      PubDate: 2016-09-20
      Issue No: Vol. 5 (2016)
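The core of an OFDM modulator is an inverse DFT that maps one symbol onto each orthogonal subcarrier; the demodulator is the forward DFT. A minimal illustration of that round trip (pure Python rather than the VHDL of the paper, and `idft`/`dft` are hypothetical helper names):

```python
import cmath

def idft(symbols):
    """Map frequency-domain symbols onto orthogonal subcarriers
    (the IFFT step of an OFDM modulator)."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def dft(samples):
    """Demodulate: recover one symbol per subcarrier."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples))
            for k in range(n)]

# 4 subcarriers carrying QPSK symbols
tx = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
time_domain = idft(tx)   # transmitted OFDM block
rx = dft(time_domain)    # received over an ideal channel
assert all(abs(a - b) < 1e-9 for a, b in zip(tx, rx))
```

A hardware implementation replaces these sums with an FFT core; the orthogonality that lets the carriers overlap is exactly what the forward transform exploits to separate them again.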
       
  • Data Retrieval By Affinity Propagation Clustering Based On Message Passing

    • Authors: B. Bharathi, P. Madhavi Latha
      Abstract: The objective is to find, among all partitions of the data set, the best one according to some quality measure. Affinity propagation is a low-error, fast, scalable, and remarkably simple clustering algorithm that can be used for forming clusters of participants in business simulations and experiential exercises, and for organizing participants' preferences for the parameters of simulations. An efficient affinity propagation algorithm guarantees the same clustering result as the original algorithm after convergence. The heart of our approach is (1) to prune unnecessary message exchanges during the iterations and (2) to compute the convergence values of the pruned messages after the iterations in order to determine the clusters.
      PubDate: 2016-09-16
      Issue No: Vol. 5 (2016)
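The message passing the abstract refers to alternates "responsibility" and "availability" updates between points and candidate exemplars. A compact sketch of the standard (unpruned) algorithm, under the usual damping and median-preference conventions; the function name is hypothetical:

```python
def affinity_propagation(s, damping=0.5, iters=200):
    """Plain message-passing affinity propagation. `s` is an n x n
    similarity matrix whose diagonal holds the exemplar preferences."""
    n = len(s)
    r = [[0.0] * n for _ in range(n)]   # responsibilities
    a = [[0.0] * n for _ in range(n)]   # availabilities
    for _ in range(iters):
        for i in range(n):              # r(i,k) = s(i,k) - max_{k'!=k}(a+s)
            vals = sorted((a[i][k] + s[i][k] for k in range(n)), reverse=True)
            for k in range(n):
                rival = vals[1] if a[i][k] + s[i][k] == vals[0] else vals[0]
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - rival)
        for k in range(n):              # availability update
            pos = [max(0.0, r[i][k]) for i in range(n)]
            total = sum(pos)
            for i in range(n):
                new = (total - pos[k] if i == k
                       else min(0.0, r[k][k] + total - pos[k] - pos[i]))
                a[i][k] = damping * a[i][k] + (1 - damping) * new
    exemplars = [k for k in range(n) if r[k][k] + a[k][k] > 0]
    return [i if i in exemplars else max(exemplars, key=lambda k: s[i][k])
            for i in range(n)]

pts = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
sims = [[-(p - q) ** 2 for q in pts] for p in pts]
med = sorted(sims[i][j] for i in range(6) for j in range(i + 1, 6))[7]
for i in range(6):
    sims[i][i] = med                    # median similarity as preference
labels = affinity_propagation(sims)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

The paper's contribution sits on top of this loop: skipping message exchanges that provably cannot change the outcome, then reconstructing their converged values afterwards.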
       
  • Differential Privacy Preserving Algorithm For Data Anonymization

    • Authors: Ch. Nanda Krishna, T. Kalyani, K. Madhavi
      Abstract: Nowadays, maximizing data utility and minimizing privacy risk are two conflicting goals. Organizations must apply a set of transformations to data at the time of release. While determining the best set of transformations has been the focus of extensive work in the database community, scalability and security remain significant issues during data transformation. The privacy-preserving method called k-anonymity was introduced to overcome this problem: all data records are partitioned into a number of datasets, and each record in a particular dataset must be indistinguishable from the other records in that dataset. However, this technique is vulnerable to certain attacks, so a further method called l-diversity was introduced to prevent background-knowledge attacks on the anonymized data.
      PubDate: 2016-09-16
      Issue No: Vol. 5 (2016)
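The two properties the abstract names are easy to state as checks over a released table: k-anonymity bounds the size of each quasi-identifier group, and l-diversity bounds the variety of sensitive values inside each group. A small illustrative sketch (the table and helper names are made up for the example):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """Every combination of quasi-identifier values must occur >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(c >= k for c in groups.values())

def is_l_diverse(rows, quasi_ids, sensitive, l):
    """Each quasi-identifier group must hold >= l distinct sensitive values."""
    groups = {}
    for r in rows:
        groups.setdefault(tuple(r[q] for q in quasi_ids), set()).add(r[sensitive])
    return all(len(v) >= l for v in groups.values())

table = [
    {"zip": "130**", "age": "<30", "disease": "flu"},
    {"zip": "130**", "age": "<30", "disease": "cancer"},
    {"zip": "148**", "age": "30+", "disease": "flu"},
    {"zip": "148**", "age": "30+", "disease": "flu"},
]
assert is_k_anonymous(table, ["zip", "age"], 2)
# the second group exposes a single disease, so 2-diversity fails
assert not is_l_diverse(table, ["zip", "age"], "disease", 2)
```

The failing assertion is exactly the background-knowledge attack the abstract mentions: an attacker who can place a person in the second group learns their diagnosis even though k-anonymity holds.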
       
  • Paddy Seeds Categorizing Based on Morphological Feature Using Data Mining
           Algorithms

    • Authors: Komatineni. Divya, Y. Sangeetha
      Abstract: Data mining is one of the emerging research fields in agriculture. Paddy is the most important and extensively grown food crop in the world, and the staple food of more than 60 percent of the world's population. Classification of paddy seeds has significant importance in determining the market value of paddy varieties, and paddy class identification is also necessary for plant breeders to predict yield and quality. In this work, our main aim is to classify different types of paddy seeds from images. To do this, we collect images of different types of paddy seeds and extract morphological features such as area, perimeter, circularity, elongation, and rectangularity; feature selection is then performed on the extracted measures. These features help to classify the paddy seeds. Classification is a data mining operation that assigns items in a collection to target families or classes; its aim is to predict the target family for each case in the data. Finally, we apply the data mining classification technique SVM together with a genetic algorithm to classify the different paddy seeds from the seed images, where an object is classified by a majority vote of its neighbours.
      PubDate: 2016-09-16
      Issue No: Vol. 5 (2016)
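The morphological features listed in the abstract have standard formulas over the segmented seed region. A minimal sketch of computing them from measured quantities (function and key names are illustrative, not from the paper):

```python
import math

def shape_features(area, perimeter, major_axis, minor_axis):
    """Morphological descriptors commonly used for seed classification."""
    return {
        "circularity":    4 * math.pi * area / perimeter ** 2,  # 1.0 for a circle
        "elongation":     major_axis / minor_axis,
        "rectangularity": area / (major_axis * minor_axis),
    }

# sanity check: a circle of radius 5 (area pi*25, perimeter 2*pi*5, axes 10 x 10)
f = shape_features(math.pi * 25, 2 * math.pi * 5, 10, 10)
assert abs(f["circularity"] - 1.0) < 1e-9
assert f["elongation"] == 1.0
```

The resulting feature vectors are what a classifier such as the SVM mentioned in the abstract would be trained on.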
       
  • Cloudorado : The Trustworthy Brokering Conspiracy for Communal Cloud
           Service

    • Authors: V.V.Sunil Kumar, P.Sairam Sekhar
      Abstract: Trust is the essence of confidence that something will or will not happen in an anticipated or promised way; enabling that confidence is supported by identification, authentication, accountability, authorization, and availability. This paper presents a trust-aware service brokering scheme for efficiently matching cloud services to diverse client requests in multiple-cloud collaborative services. First, a trusted third-party-based service brokering architecture is proposed for multiple-cloud environments, in which the broker (T-broker) acts as middleware for cloud trust management and service matching. T-broker then uses a hybrid and adaptive trust model to compute the overall trust level of service resources, in which trust is defined as a fused evaluation result obtained by adaptively combining directly monitored evidence with the social feedback on the service resources. T-broker also uses a lightweight feedback mechanism that effectively reduces networking risk and improves system efficiency. Experimental results show that, compared with existing approaches, T-broker yields good results in many typical cases, and the proposed framework is robust to varying amounts of dynamic service behaviour from different cloud infrastructures.
      PubDate: 2016-09-13
      Issue No: Vol. 5 (2016)
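The fusion step described above, combining directly monitored evidence with social feedback, can be sketched as a weighted blend followed by a broker ranking. This is an illustrative simplification of the idea, not T-broker's actual model; all names and the weight `alpha` are assumptions:

```python
def overall_trust(direct, feedback, alpha=0.7):
    """Hybrid trust: weighted blend of directly monitored evidence and
    social feedback, both scores in [0, 1]."""
    assert 0.0 <= alpha <= 1.0
    return alpha * direct + (1 - alpha) * feedback

def rank_services(services, alpha=0.7):
    """Broker step: order candidate services by fused trust level."""
    return sorted(services,
                  key=lambda s: overall_trust(s["direct"], s["feedback"], alpha),
                  reverse=True)

candidates = [
    {"name": "cloudA", "direct": 0.9, "feedback": 0.4},
    {"name": "cloudB", "direct": 0.6, "feedback": 0.9},
]
# with alpha = 0.7: cloudA scores 0.75, cloudB scores 0.69
assert rank_services(candidates)[0]["name"] == "cloudA"
```

Making `alpha` adaptive, i.e. shifting weight toward whichever evidence source has proven reliable, is the "adaptive" part of the hybrid model the abstract claims.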
       
  • Performance evaluation of variable channels for WiMAX network

    • Authors: Saif A Abdulhussein
      Abstract: WiMAX adds several capabilities to wireless technologies, such as a long coverage area, support for different types of quality of service (QoS) for customers, and a high data rate. The long range of WiMAX comes from its high transmit power and from a network structure similar to that of a mobile network. To support flexibility, efficiency, and the varied QoS requirements of different services and environments, several provisioning mechanisms are provided in the standard. Voice over Internet Protocol (VoIP) over WiMAX is found to be very promising. In this paper, a simulative study of VoIP in a WiMAX network is carried out, considering the effect of modulation and coding (MC) schemes and variable-channel mechanisms on the QoS performance of scheduling types such as rtPS, ertPS, and UGS, using the OPNET network simulator. The study covers important QoS parameters such as average throughput, data dropped, end-to-end delay, and WiMAX load. The results show that the adaptive modulation and coding scheme gives the best WiMAX measurements (data dropped, throughput, and WiMAX load), while the 64-QAM 3/4 coding scheme gives the best packet end-to-end delay in the vehicular environment. As for the scheduling services, rtPS is found to be the best.
      PubDate: 2016-09-11
      Issue No: Vol. 5 (2016)
       
  • Improved File Search And Sharing Mechanism In Structured P2P Systems

    • Authors: Satya Priyanka Mandapati, V. Ravi Kishore
      Abstract: File querying is an important function that determines the performance of a P2P system. To improve file query performance, peers with common interests can be clustered based on physical proximity. Existing methods are dedicated to unstructured P2P systems and lack a strict policy for topology construction, which decreases file-location efficiency. In this project, we propose a proximity-aware, clustered, structured P2P file-sharing system: it forms clusters based on node proximity, together with sub-clusters of nodes sharing a common interest. A DHT-based lookup function and a file replication algorithm are used to support efficient file lookup and access. Although sub-interest-based file querying improves querying efficiency, it is still not sufficiently scalable when a sub-interest group contains a very large number of nodes, and querying may become inefficient when a sub-interest super-node is overloaded or fails. We therefore propose a distributed intra-sub-cluster file querying method to further improve file querying efficiency and reduce overheads and file-search delays.
      PubDate: 2016-09-10
      Issue No: Vol. 5 (2016)
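The DHT lookup the abstract relies on is typically a hash ring: each file key is owned by the first node clockwise from its hash, so any peer can resolve a key without flooding. A minimal sketch (the ring class and node names are illustrative, not the paper's protocol):

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    """Minimal DHT-style ring: a key is owned by the first node whose
    hash follows the key's hash (wrapping around)."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def lookup(self, key: str) -> str:
        hashes = [p for p, _ in self.ring]
        idx = bisect_right(hashes, h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.lookup("lecture-notes.pdf")
assert owner in {"node-a", "node-b", "node-c"}
# the same key always resolves to the same node
assert owner == ring.lookup("lecture-notes.pdf")
```

Clustering by proximity, as the paper proposes, would constrain which nodes land on a given ring rather than change the lookup itself.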
       
  • The Design Based Content Emission Recognition for Trusted Content
           Conveyance Systems

    • Authors: M.J.M Reddy
      Abstract: Owing to the increasing popularity of multimedia applications and services in recent years, the problem of trusted video delivery to prevent undesirable content leakage has become critical. While preserving user privacy, conventional systems have addressed this problem by proposing methods based on observation of the streamed traffic throughout the network. These systems maintain high detection accuracy while coping with some of the traffic variation in the network, but their detection performance degrades substantially owing to significant variation in video length. In this paper, we focus on overcoming this problem by proposing a novel content-leakage detection scheme that is robust to variation in video length. By comparing videos of different lengths, we determine a relation between the length of the videos to be compared and the similarity between them, and thereby enhance the detection performance of the proposed scheme even in an environment subject to variation in video length. Through a testbed experiment, the effectiveness of the proposed scheme is evaluated in terms of variation of video length, delay variation, and packet loss.
      PubDate: 2016-09-08
      Issue No: Vol. 5 (2016)
       
  • An Inventive Approach for Challenging AI Problems in graphical passwords:
           CAPTCHA

    • Authors: T.Y Ramakrushna
      Abstract: Client authentication is one of the most important areas of information security. Most web applications provide knowledge-based authentication, which includes alphanumeric passwords as well as graphical passwords. Graphical passwords play an important role in security for the user. Existing systems secure authentication in the cloud using graphical passwords but are limited to usernames in text format. The proposed system provides better authentication by processing the username or user ID with the PCCP (Persuasive Cued Click-Points) technique and the password with the CaRP (Captcha as gRaphical Passwords) technique; new space-time authentication techniques are proposed in this project, and Captcha-as-graphical-passwords authentication is a new direction in the development of authentication techniques. Today's information systems need reliable identification between communicating entities; this process of entity identification is usually called authentication. The main focus of this project is banking security. Many security primitives are based on hard mathematical problems; using hard AI problems for security is emerging as an exciting new paradigm, but it has been under-explored. In this project we describe a new security primitive based on hard AI problems, a scheme we call Captcha as gRaphical Passwords (CaRP). CaRP is both a Captcha and a graphical password scheme. It addresses a number of security issues, such as online guessing attacks, relay attacks, and, when combined with dual-view technologies, shoulder-surfing attacks.
      PubDate: 2016-09-04
      Issue No: Vol. 5 (2016)
       
  • Imparting Data Mining Over Big Data to Cooperative Data Sharing

    • Authors: Ch.S.K.V.R Naidu
      Abstract: Data has become an essential part of every economy, industry, organization, business function, and individual. Big Data is a term used to identify datasets whose size is beyond the ability of typical database software tools to store, manage, and analyze. Big Data presents notable computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, and measurement errors; these challenges require new computational and statistical paradigms. This paper presents a literature review of Big Data mining and its issues and challenges, with emphasis on the distinguishing characteristics of Big Data. It also discusses several techniques for dealing with Big Data, describing the journey from data mining to web mining to Big Data, examining each of these approaches briefly together with their applications, and stressing the importance of mining Big Data today using fast and novel approaches. The major aim of this paper is to survey the concept of Big Data and its application in data mining, focusing on the different kinds of Big Data and their application in knowledge discovery. Using Big Data mining, organizations can extract useful information from large pools or streams of data, and valuable insights can be obtained by analysing these datasets. In spite of its usefulness, Big Data poses several challenges, which are becoming one of the most transformative areas of research for the coming years. The paper presents an overview of the topic, its techniques, and a forecast of what is to come.
      PubDate: 2016-09-04
      Issue No: Vol. 5 (2016)
       
  • Secure Adaptive Privacy Policy Prediction (A3P) Framework Using Policy
           Setting

    • Authors: Seshadrireddy Ramula, P. Viswanatha Reddy
      Abstract: With the increasing volume of images users share through social sites, maintaining privacy has become a major problem, as demonstrated by a recent wave of publicized incidents where users inadvertently shared personal information. In light of these incidents, the need for tools to help users control access to their shared content is apparent. To address this need, we propose an Adaptive Privacy Policy Prediction (A3P) framework to help users compose privacy settings for their images. We examine the role of social context, image content, and metadata as possible indicators of users' privacy preferences. We propose a two-level framework which, according to the user's available history on the site, determines the best available privacy policy for the user's newly uploaded images. Our solution relies on an image classification framework that groups images into categories which may be associated with similar policies, and on a policy prediction algorithm that automatically generates a policy for each newly uploaded image, also according to the user's social features. Over time, the generated policies will follow the evolution of users' privacy attitudes. We report the results of our extensive evaluation over 5,000 policies, which demonstrate the effectiveness of our framework, with prediction accuracies over 90 percent.
      PubDate: 2016-08-31
      Issue No: Vol. 5 (2016)
       
  • Secure And Intelligent Compression With Improved Reliability

    • Authors: D. Keerthipriya, D. Humeera
      Abstract: Data deduplication is a technique for eliminating duplicate copies of data and has been widely used in cloud storage to reduce storage space and upload bandwidth. With deduplication, only one copy of each file is stored in the cloud even if the file is owned by a huge number of users. As a result, a deduplication system improves storage utilization while reducing reliability. Furthermore, the challenge of privacy for sensitive data also arises when users outsource it to cloud storage. Aiming to address these security challenges, this paper makes the first attempt to formalize the notion of a distributed reliable deduplication system. We propose new distributed deduplication systems with higher reliability in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems. Security analysis demonstrates that our deduplication systems are secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement the proposed systems and demonstrate that the incurred overhead is very limited in realistic environments.
      PubDate: 2016-08-31
      Issue No: Vol. 5 (2016)
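To make the secret-sharing idea concrete: splitting a chunk so that all servers are needed to reconstruct it can be done with simple XOR shares. This is a toy (n, n) scheme for illustration only; the paper's deterministic scheme is more sophisticated (a threshold scheme with tag consistency), and all names here are made up:

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(chunk: bytes, n: int):
    """(n, n) XOR secret sharing: all n shares are required to recover
    the chunk; any n-1 shares look like random noise."""
    shares = [os.urandom(len(chunk)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, chunk))   # last share closes the XOR sum
    return shares

def combine(shares):
    return reduce(xor, shares)

shares = split(b"deduplicated data chunk", 4)   # spread over 4 cloud servers
assert len(shares) == 4
assert combine(shares) == b"deduplicated data chunk"
```

A threshold variant (k of n shares suffice) is what actually buys the reliability the abstract claims, since it tolerates server failures.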
       
  • Scalable Self-Conscious Spectral Clustering

    • Authors: G.Veerendra Nath, K.Suresh Kumar Reddy
      Abstract: Constrained spectral clustering (CSC) algorithms, which encode side information into spectral clustering, have shown great promise in improving clustering accuracy. However, existing CSC algorithms are unable to handle large datasets. In this paper, we develop a scalable and efficient CSC algorithm by integrating sparse coding into a constrained normalized-cut formulation over a reduced graph structure. To this end, we derive a scalable, simplified constrained-cut problem that admits a closed-form analysis, and we show that it reduces to a simple eigenvalue problem that can be solved efficiently. We also give a theoretical account of how the CSC algorithm handles large datasets. Experimental results on benchmark datasets show that the proposed algorithm is highly cost-effective: (1) it achieves significant accuracy improvements over baselines when side information is available; and (2) it runs in far lower time while reaching clustering quality close to the state of the art. Along with this functionality, we also implement application-level auditing: we capture who logged in to our application and when, storing details such as login time, logout time, and the actions performed in the application. If somebody does something illegal or misbehaves in our application, we can identify them easily from these details.
      PubDate: 2016-08-31
      Issue No: Vol. 5 (2016)
       
  • Advanced Progressive De-duplication System

    • Authors: K.Bhargava Kumar Reddy, P. Viswanatha Reddy
      Abstract: Duplicate detection is the process of identifying multiple representations of the same real-world entities. Today, duplicate detection methods need to process ever larger datasets in ever shorter time, and maintaining the quality of a dataset becomes increasingly difficult. We present two novel progressive duplicate detection algorithms that significantly increase the efficiency of finding duplicates when the execution time is limited: they maximize the gain of the overall process within the time available by reporting most results much earlier than traditional approaches. Comprehensive experiments show that our progressive algorithms can double the efficiency over time of traditional duplicate detection and significantly improve upon related work. Along with this, we also implement a secure login for our application. In general, the data registered in applications (first name, last name, age, gender, date of birth, and so on, including the username and password) is stored in a database; if the database is hacked, all registration details may be exposed and usernames and passwords misused. To keep this data secure, we implemented a cryptosystem in our application so that no one can log in using another user's password.
      PubDate: 2016-08-31
      Issue No: Vol. 5 (2016)
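A common way to make duplicate detection progressive, in the spirit of this abstract, is the sorted-neighborhood idea: sort records by a key, then compare pairs in order of increasing rank distance, so near neighbours (the likeliest duplicates) are compared first. A sketch under that assumption (the function name and data are illustrative, not the paper's two algorithms):

```python
def progressive_pairs(records, key):
    """Progressive sorted-neighborhood: yield candidate pairs in order of
    increasing rank distance, so likely duplicates come out first."""
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    for dist in range(1, len(records)):          # window grows progressively
        for j in range(len(records) - dist):
            yield order[j], order[j + dist]

records = ["ann", "anna", "bob", "bobby", "zoe"]
pairs = list(progressive_pairs(records, key=lambda r: r))
# the earliest comparison is between lexicographic neighbours
assert pairs[0] == (0, 1)            # "ann" vs "anna"
# eventually every pair is covered, so nothing is missed given enough time
assert len(pairs) == 10              # C(5, 2)
```

If execution is cut off early, the pairs already emitted are exactly the high-likelihood ones, which is the "report most results earlier" property the abstract claims.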
       
  • Secure Diversification of XML Keyword Search Based On Its Different
           Contexts

    • Authors: P. Dilshad, D. Humeera
      Abstract: While keyword search enables ordinary users to search vast amounts of data, the ambiguity of keyword queries makes it hard to answer them effectively, especially for short and vague queries. To address this challenging problem, in this paper we propose an approach that automatically diversifies XML keyword search based on its different contexts in the XML data. Given a short and vague keyword query and the XML data to be searched, we first derive keyword search candidates for the query using a simple feature selection model. We then design an effective XML keyword search diversification model to measure the quality of each candidate. After that, two efficient algorithms are proposed to incrementally compute the top-k qualified query candidates as the diversified search intentions. Two selection criteria are targeted: the k selected query candidates should be most relevant to the given query while covering a maximal number of distinct results. Finally, a comprehensive evaluation on real and synthetic data sets demonstrates the effectiveness of our proposed diversification model and the efficiency of our algorithms.
      PubDate: 2016-08-30
      Issue No: Vol. 5 (2016)
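The two selection criteria above, relevance plus coverage of distinct results, suggest a greedy top-k selection. A simplified sketch of that idea (scoring, names, and data are assumptions, not the paper's model):

```python
def diversify(candidates, results, k):
    """Greedy top-k selection. `candidates` maps query candidate ->
    relevance score; `results` maps candidate -> set of result ids.
    Each pick maximizes relevance + number of not-yet-covered results."""
    chosen, covered = [], set()
    pool = set(candidates)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda q: candidates[q] + len(results[q] - covered))
        chosen.append(best)
        covered |= results[best]
        pool.remove(best)
    return chosen

cands = {"apple fruit": 2.0, "apple phone": 1.9, "apple pie": 1.0}
res = {"apple fruit": {1, 2, 3}, "apple phone": {4, 5}, "apple pie": {1, 2}}
# "apple pie" loses despite overlapping results: it covers nothing new
assert diversify(cands, res, 2) == ["apple fruit", "apple phone"]
```

Subtracting already-covered results at each step is what steers the selection toward distinct interpretations of an ambiguous query.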
       
  • The Keyword query search Diversifies Innovative Improved Mapping on Xml
           Data

    • Authors: K ChandraKala, D Kalpana
      Abstract: Several current XML keyword query approaches take the subtrees rooted at the smallest lowest common ancestor of the keyword-matching nodes as the basic result units. The structural relationships among XML nodes are over-emphasized in these approaches, while the contextual meanings of XML nodes are not taken seriously. To change this situation and improve the match between users' query intentions and the final query results, we propose a two-stage XML keyword query algorithm. Keyword search allows ordinary users to search large amounts of data, but the ambiguity of keyword queries makes it hard to answer them effectively, especially for short and ambiguous queries. To resolve this problem, this paper proposes an approach that automatically diversifies XML keyword search based on the different contexts of the keywords in the XML data. Given a short and vague keyword query and the XML data to be searched, keyword search candidates for the query are first obtained by a simple feature selection model. We then design an effective diversified XML keyword search model to measure the quality of each candidate. After that, two efficient algorithms are proposed to incrementally compute the top-k query candidates as the diversified search intentions. Two criteria are targeted: the selected candidates should be most relevant to the given query while covering the maximal number of distinct results. Finally, a full evaluation on real and synthetic data sets demonstrates the effectiveness of our diversification model and the efficiency of the proposed algorithms.
      PubDate: 2016-08-29
      Issue No: Vol. 5 (2016)
       
  • The Exploration of Filtering Mechanism for Unwanted Messages on OSN User
           Wall Using CBMF

    • Authors: V.Sampath Kumar, M.Chinna Rao, A.V.S.N. Murthy
      Abstract: One important issue in today's Online Social Networks (OSNs) is giving users the ability to control the messages posted on their own private space, so that unauthorized or unwanted content is not displayed; OSNs provide little support for this requirement. In this thesis, I propose a system that allows OSN users to have direct control over the messages posted on their walls. This is achieved through a flexible rule-based system that permits users to customize the filtering criteria applied to their walls, and a machine-learning-based soft classifier that automatically labels messages to support content-based filtering. We first conduct a set of large-scale measurements over a collection of accounts to observe the differences among human, bot, and cyborg accounts in terms of tweeting behaviour, tweet content, and account properties. Our experimental evaluation demonstrates the effectiveness of the proposed classification system, and we additionally use pattern matching and a message classification algorithm for accurate results.
      PubDate: 2016-08-29
      Issue No: Vol. 5 (2016)
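The rule-based half of the pipeline described above (the ML soft classifier is out of scope here) can be sketched as user-defined patterns applied to incoming wall posts. All names, rules, and messages are invented for the illustration:

```python
import re

def filter_wall(messages, rules):
    """Apply user-defined filtering rules to wall posts: a message is
    blocked if any rule pattern matches it (case-insensitive)."""
    compiled = [re.compile(p, re.IGNORECASE) for p in rules]
    shown, blocked = [], []
    for msg in messages:
        (blocked if any(c.search(msg) for c in compiled) else shown).append(msg)
    return shown, blocked

rules = [r"\bviagra\b", r"\bfree money\b"]
posts = ["Happy birthday!", "FREE MONEY click here", "see you tomorrow"]
shown, blocked = filter_wall(posts, rules)
assert shown == ["Happy birthday!", "see you tomorrow"]
assert blocked == ["FREE MONEY click here"]
```

In the full system the soft classifier would supply content labels (e.g. vulgar, hate, spam) that these rules could reference instead of raw regular expressions.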
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 

JournalTOCs © 2009-2016