TEAM IOTA 2010 EDITOR-IN-CHIEF
CONCEPT AND DESIGN Sakshi Yadav TEAM MEMBERS
Vipul Gupta Sakshi Arora Twinkle Gupta Sanchita Gupta Gautam Prabhakar Anubha Gupta
EDITOR’S NOTE We are surrounded by technology. It is everywhere we go, from the humble television set to the complex machinery running a nuclear power station. Understanding what goes into it makes a lot of difference to a layman. Such technology needs constant development and change; it is a motion that never ceases. Brains from all over the world are geared to generate such things; they twirl and turn and churn out vast amounts of complexity that go beyond what we see and know. As technical graduate students, we are earmarked to enter this long production setup as we learn, develop innovations and tweak the gadgetry. Being mere infants in this process, technical minds need a platform to showcase their “stuff”, and IOTA helps provide them with one. As you flip through the pages, you will come across a myriad display of what the minds of the DTU students have been up to. It is awe-inspiring to behold how much they gather in their few years in the institute and the amount they give back with their dedication and learned activities. IOTA 2010 proudly showcases one of DTU’s best vehicle teams, the Mini Baja, and helps decipher the technical jargon of technologies and innovations like vector quantization and digital image watermarking. Learn more about the ever-expanding web with an article on web accessibility, and more about binary numbers with a piece on binary digits. We, the Editorial team, present to you IOTA...
Incredible Odyssey of Technical Aspiration
CONTENTS VICE CHANCELLOR’S ARTICULATION
BRANCH COUNSELLOR’S ADDRESS
AUXILIARY CONTROLLED SVS FOR DAMPING SSR IN A SERIES COMPENSATED POWER SYSTEM
DIGITAL IMAGE WATERMARKING
IR SENSITIVITY ANALYSIS OF TRANSMISSION LINES USING MATLAB AND SIMULINK MODEL
MICROBIAL FUEL CELL
MANAGING VALUE CHAINS: BEYOND SUPPLY CHAIN
50 GBPS PHOTONICS LINK
A DIFFERENT LOOK AT THE BINARY NUMBERS
VISIBLE LIGHT COMMUNICATION
IEEE STUDENT COUNCIL 2010-11
Incredible Odyssey of Technical Aspiration
VICE CHANCELLOR’S ARTICULATION Prof. P. B. Sharma
(Ph.D, FIE, FAeroS, FWAPS, Founder Vice Chancellor)
It brings me immense pleasure to note the release of yet another issue of IOTA, the annual technical journal of the IEEE Student Branch. Delhi Technological University has proved its merit in academic circles, gaining a reputation as one of India’s most prestigious universities. Completing this one year as a university, DTU has stood strong and gained much publicity and accolades in all the sectors it has explored. Learning is the foundation on which this university is based, and the whole DTU family is a learner in this process. Innovation comes when one delves deep into the process of learning, asking, questioning and exploring. As a result, the university has many teams and achievements under its belt. DTU/DCE has been ranked highly by all the prominent surveys of engineering institutions in the country, including ZEE News-DNA (which places DTU/DCE among the top ten engineering institutions), Outlook-MDRA (which has accorded ninth position to DTU among the ‘Top 50 Engineering Institutions’), India Today (where the ranking of DTU has gone up from 15th last year to 13th this year), and HT-C-Fore (which has also placed DTU at 13th position this year, while ranking DTU at 6th position for placements), among others. Being a technical institute, DTU offers many opportunities to students where they can explore their technical side and gain knowledge. As is known, knowledge expands from learning, and such is the mantra at DTU that students are encouraged and applauded at every small or big achievement of theirs. With a huge infrastructure, a student-friendly campus and a large resource house of books, laboratories and an ever-willing faculty, DTU, first as DCE and now as a university, has buoyed the confidence of its students. DTU is also gearing towards becoming a self-sustaining, green campus. The master plan of the university campus has been revisited to focus on green energy technologies, green building architecture, a vehicle-free academic zone and waste water recycling.
A major thrust on bio-fuel and solar power to meet one-third of the campus electricity requirement is in the offing. DTU already has 110 solar street lights and solar water geysers, and soon a 1 MW solar photovoltaic plant will be installed at DTU, making it the first major solar-powered campus. A 20,000-litre capacity waste water recycling plant is also nearing completion within the DTU campus. With its academics, research and innovations aligned towards creating the power of science and its integration with the might of technology, DTU, as a technological university, is poised to lead the nation in Science and Technology Education, Research and Innovation. Keeping in mind the many needs of the students and the urgent aim of a society which flourishes in sociology and human networking, DTU has always kept up the tradition of keeping afloat many societies under its wings. One such technical society is IEEE, where students from many different areas get together to explore and innovate. I am pleased to note that once again, IEEE has unfailingly released its technical journal, which showcases the many talents of the students of DTU. It is always a pleasure to behold such a magazine, which offers an insight into the brain pool of the university: students participating in various international competitions, many involved in deep research, and yet another batch exploring new avenues in technical learning. Wishing the students at the Delhi Technological University a bright and shining future.
BRANCH COUNSELLOR’S ADDRESS Prof. Asok Bhattacharyya
IEEE is a student branch which has always been at the forefront, gaining knowledge and learning new things. As a society it has seen days filled with innovative students flourishing under its wings, all set to fly into the real world. As branch counsellor for many years, I have stood witness to the meteoric rise of the society from being just one amongst many to being the best in many regions. It has always helped students explore new arenas in their quest for knowledge, has encouraged new talent to come up and has lived up to the expectations with which a young student steps into the university and into a society. IEEE boasts of a full calendar of year-long activities aimed at every aspect of a student’s learning. The year begins with Techweek, a week-long flurry of workshops aimed at introducing young, fresh minds to the many nuances of the technical world. This is followed by SPAVE and PANACHE, one a managerial event targeted at the upcoming managerial talents in the college and the other a highly anticipated quiz. The much awaited technical fest, TROIKA, is held in February, where the students are challenged based on their level of learning in various aspects like designing, mechanical works, software programming and hardware design. As a society IEEE only gives to the students, aiming for them to learn and learn and learn some more. It has always provided them with a platform to interact with the corporate world and create bonds which last a lifetime. It gives them a base to work their skills upon, giving them opportunities and international exposure in many arenas, and it has always created a family of students which is ever ready to help each other. In a technical institute, it is imperative that students are technically sound, with knowledge of a wide range of subjects. IOTA is one such publication which helps create such a base for them.
It is a collection of the many latent talents hidden in the crevices of the university, deep in their research or holed up in their laboratories. Team IOTA brings to light the matter they have been working on, gives them a platform for their talent and helps them get their research published. The magazine also offers an insight into the world of machines and wires. The technological world has never been as familiar, as the many graduate students, PG students and faculty team up to write on the new innovations and new technologies in their fields and bring a fresh-out-of-school graduate up to date with the happenings in that other world. As you flip through the pages of IOTA, you will be surprised and awed by the display of talent and innovation the brain pool of the university has managed to pull off. I would like to thank the editorial team for all their efforts in collecting the articles and putting together a brilliant piece of work.
CHAIRMAN’S ADDRESS Rahul Batra
On behalf of IEEE-DTU I would like to congratulate Honorable Vice Chancellor Prof. P. B. Sharma on the successful completion of the first year of DTU. I hope the upcoming year reaches greater heights in terms of technology and knowledge, with the students more oriented towards development and learning. This year we plan not only to conduct events round the year but also to develop a platform for students to learn and to increase their scope of interaction with veterans, so that the students can jumpstart their careers. Just after Techweek, the Special Interest Groups (SIGs) will take a fresh initiative, building on the past year’s experience, and move forward. Another asset for this student branch is the Web Management Group (WMG), which handles our website, www.dcetech.com. We also plan to develop some industrial collaborations which will lay emphasis on workshops conducted by trained industry people and on trips to these industries, so as to have an insight into their actual working. Techweek this year will be a treat for the students, who will gain knowledge in comfort. The showcase of technical as well as non-technical workshops will let them focus on their interests and choose their desired path. There will be a plethora of workshops like Robotics, Embedded Systems, MATLAB, Photoshop etc. It involves our collaboration with several industries like Powergrid and CISCO, where professionals from these industries will come to explain how they work and what the prerequisites to join them are. Some of the other important workshops are Speed Mathematics by Alchemist, and GD and Personality Development by a veteran, Mr. Ananth. Apart from technical expertise, one is required to be equally good in aptitude and communication skills. We take care of both these issues, and thereby we present to you TECHWEEK 2010. IOTA is the Annual Technical Journal of the IEEE DTU Student Branch.
It has the ability to inspire readers with its articles, full of technical knowledge and expertise to share. It contains articles from our respected teachers, as well as some of the research being done and the latest technologies which haven’t even touched the market yet. I would like to congratulate the complete team IOTA for making such a great effort in bringing together such a wonderful journal. Lastly, I would like to thank Prof. P. B. Sharma and our Branch Counselor Prof. Asok Bhattacharyya for giving us such a great opportunity to share our knowledge, for being a constant support and for guiding us along the right path.
TECHWEEK HIGHLIGHTS TEAM TECHWEEK
Group Discussion A rich personality is always one to behold. Your brains will take you far in life, and your personality further still. Learn the tricks of the trade from the experts on how to develop your personality and polish it to make you the envy of all. MATLAB and Digital Image Processing Have you ever wondered how exactly a robot sees? What is it that happens behind the cameras, in those deep lenses of eyes? Digital Image Processing is what it takes to unravel the mystery. This workshop, clubbed with MATLAB, helps explain everything that you need to know about image processing. A must for all hardware geeks and wannabe geeks! Speedy Mathematics Shakuntala Devi is a cult figure among mathematicians, known for her almost computer-like skills. Now, learn her secrets and her tricks of the trade at this workshop on Vedic mathematics. Divide and multiply those astounding figures in your head, juggle numbers as ceaselessly as you juggle your work, and watch how numbers turn from your foes into your friends. Web Development Decipher the technical jargon behind those codes and software, and the secret behind what you see on a website. Watch and learn web development in this interactive session, and soon you will be able to develop your own personal site. Need I say more?
Photoshop Photoshop is a name every graphics enthusiast is familiar with. At the Photoshop workshop, learn the nuances of making your very own personal graphics studio. Tweak those photos you have, and make posters and designs to the envy of the best artists. With a simple click here and a click there, liken yourself to the likes of Picasso and Van Gogh. Learn the effects that make an image stand out and dazzle everyone with your skills. And the result is that you can then boast about your skills in graphic designing and carve your way into the huge world of design, where you can land up in web designing, poster making, magazine designing and lots more! Technologies Used in Transmission Lines POWER GRID Corporation is a state-owned company responsible for inter-state transmission of electricity. This workshop by the company aims to highlight the various technologies which the country uses in transmission lines. They explain the difficulties faced in encompassing the whole country and how they brought about changes in technologies to their benefit. Networking and Programming If a computer is a coder’s god, then CISCO definitely stands out as the Mecca of such a religion. This corporate giant offers an insight into the huge world of computers and networking with their workshop. It is a dream come true for every geek coding away in his room to learn from the leading professionals from CISCO. A workshop you cannot miss!
3-D Animation A visual treat it is, and for all its apparent simplicity, deceptively deep too. The 3D animation workshop showcases the genius of the IEEE animation storehouse and provides an introduction for animation enthusiasts who plan to further their skills as animators. Make that squiggly line dance to your tunes and jumble up everything to create your very own animation movie! Embedded Systems A term every hardware enthusiast is familiar with. Embedded systems: as snazzy as it sounds, it is a treat for all who wish to immerse themselves in the world of sensors and microchips.
TECHDEFENCE Sunny Vaghela
TechDefence, a company started by ethical hacker Sunny Vaghela, rose to prominence by detecting a loophole in the Yahoo email system. Unbeknownst to all, Yahoo services had a small backdoor in their e-mail system which allowed any hacker, with a slight change in script, to access a person’s most personal emails without any passwords. This allowed the hacker community to let go of the Trojans and the viruses; they simply chose to alter the script and, voila, they had the details of a person’s email. As TechDefence explains the case, the person who wants to hack an account just needs to send an HTML script in an email, which would give access to the targeted email account through the cookies generated by the service provider. Every website that is opened by an Internet user generates cookies containing its data. These cookies are present while the website is in use, and are only deleted when the user closes the website and clears the cache.
The uncanny thing is that the HTML script can’t be detected by any phishing filter, spyware filter or the best anti-virus software, as it can’t be categorized as a Trojan or a virus, considering that it is just a common HTML link. A Yahoo user’s account can get hacked by just clicking on the email sent in by the hacker. The script present in the mail will grab all the cookie information on the browser and send it to the hacker. This gives him complete access to the email account. I have tried the same on other service providers, but it could not be done, as the loophole lies with the coding of Yahoo mail. The menace of this does not just end with the hacking of the Yahoo mail account. All the sites and gateways which require a Yahoo ID can be accessed after that. This means that if a person accesses a job site or a social networking site, or has the Yahoo mail ID as an alternate ID to another mail account, it will give access to all of them. The shocking thing is that the hacker’s IP address is not going to be logged on the Yahoo server at all, even though he can access the victim’s account. TechDefence notes the vulnerability of the internet age and urges people to be more careful. Seeing how lax the websites themselves are, it seems like a difficult situation. With many hackers swarming the Indian database, it has now become imperative that the cyber populace focus on its security and start making the internet a secure place.
VECTOR QUANTISATION Rajiv Kapoor, HOD, ECE Dept.
Introduction The objective of digital image compression techniques is the minimization of the number of bits required to represent an image while maintaining acceptable image quality. The size of an image is determined by its dimensions and the resolution of each pixel; for example, a 512 pixel x 512 pixel image with a resolution of 8 bits/pixel has a size of 256 KB. The search for efficient image compression techniques ensues owing to the increasing pressure on transmission and storage media due to the emergence of high-definition multimedia. Vector quantization (VQ) is a very popular lossy data compression method that has seen numerous implementations in image compression systems owing to its simplicity. Image compression is an important methodology for reducing the large amounts of image data to be handled; such technology can improve working speed in the transmission and storage of multimedia, video, and medical diagnosis and interpretation systems. The goal is to obtain an optimum codebook of finite length which can faithfully represent the original image. Once such an optimum codebook is obtained, the training process is finished. The encoding process of VQ is to determine the mapping of the input image set onto the finite collection of code vectors in the codebook. When all image training patterns are marked with the index of their nearest code vector, the encoding phase is finished. Such a codebook is much smaller than the original image data set; therefore, the purpose of image compression is achieved. In the decoding process, the associated subimage is retrieved from the same codebook that was used in the encoding phase. When each subimage is completely reconstructed, the decoding phase is completed. One of the desirable objectives of VQ compression technology is to increase the compression rate while achieving higher fidelity. The higher the compression rate, the lower the required memory and transmission channel bandwidth.
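The encode/decode cycle described above can be sketched in a few lines of Python; the following is a minimal illustration only, where the block dimension, codebook size and toy data are assumptions for demonstration, not taken from the article:

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Encoding phase: map each image block to the index of its nearest
    code vector under the Euclidean distance."""
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def vq_decode(indices, codebook):
    """Decoding phase: rebuild each sub-image from the shared codebook."""
    return codebook[indices]

# Toy data: 8 "blocks" of dimension 2 and a codebook of 4 code vectors.
rng = np.random.default_rng(0)
blocks = rng.random((8, 2))
codebook = rng.random((4, 2))

indices = vq_encode(blocks, codebook)         # the compressed representation
reconstructed = vq_decode(indices, codebook)  # lossy reconstruction
```

Only the indices (and the shared codebook) need to be stored or transmitted, which is where the compression comes from.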
A good image compression technique not only achieves a high compression rate but also assures good quality of the compressed image. The well-known Linde-Buzo-Gray (LBG) vector quantization algorithm is an iterative learning-based VQ method for approaching optimal codebooks. The LBG algorithm is a generalized Lloyd type approach for codebook construction. It initializes M code vectors, which are first created randomly and then processed iteratively using a Euclidean norm type metric to yield a codebook that minimizes the average distortion between the training patterns and code vectors. Such an approach suffers from two major drawbacks: firstly, the performance is very sensitive to the choice of the initial codebook as well as to the parameters; secondly, since the Euclidean type metric is not convex, there is a possibility of getting stuck in local minima in the codebook design. The hard decision approach imposed by LBG fails on real images comprising soft patterns. A soft decision scheme based on fuzzy sets has been proposed, with a Gaussian membership function to estimate the closeness of training vectors to the code vectors. The fuzzy particle swarm optimization (PSO) learning algorithm, combining fuzzy inference analysis with the PSO learning scheme, was proposed on this basis. Attempts have been made to improve PSO performance in recent years; much work has focused on the parameter settings of the algorithm and on combining various techniques with PSO. The SAPSO algorithm is one alternative. In this algorithm there exist two states for each particle, the explorative state and the exploitative state. In the exploitative state, the particle is attracted by the current global best position and its own personal best position. In the explorative state, it is repelled away from its current personal best position and its personal worst position to search other promising areas. So, each particle is influenced not only by the current global best position and personal best position, but also by its personal worst position, depending on its current status. A dynamic adaptation technique is used in SAPSO models, where the inertia weight for each particle varies dynamically with its evolution degree and the current swarm evolution degree. PSO suffers from a lower convergence speed, which is reflected in lower accuracy.
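A bare-bones version of the LBG iteration just described might look like this (a sketch assuming random initialization from the training set and a fixed iteration count rather than a distortion-based stopping rule):

```python
import numpy as np

def lbg(training, M=4, iters=20, seed=0):
    """LBG / generalized Lloyd iteration: assign every training vector to
    its nearest code vector, then move each code vector to the centroid of
    its partition.  As noted in the text, the result is sensitive to the
    random initial codebook and can get stuck in local minima."""
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), size=M, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
        assign = np.argmin(d, axis=1)
        for m in range(M):
            members = training[assign == m]
            if len(members):              # leave empty cells unchanged
                codebook[m] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
train = rng.random((100, 2))   # toy training vectors
cb = lbg(train, M=4)
```

Each pass performs exactly the two hard-decision steps of the generalized Lloyd approach: nearest-neighbour assignment, then centroid update.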
The FSAPSO-based learning algorithm for the design of an optimum VQ codebook, used to build up the image compression system, is illustrated in Figure 1. The novel FSAPSOVQ learning algorithm, which combines fuzzy inference analysis with the SAPSO learning scheme, overcomes the local minima problem of the FPSOVQ learning algorithm in designing codebooks for the image compression system. FSAPSO outperforms FPSO in terms of solution accuracy, convergence speed and algorithm reliability.
Figure 1: Flow chart of FSAPSO algorithm for VQ.
Self-Adaptive particle swarm optimization Particle swarm optimization (PSO) is one of the swarm intelligence (SI) algorithms, first introduced by Kennedy and Eberhart, inspired by swarm behaviours such as birds flocking and fish schooling. In PSO, an underlying relation exists among the inertia weight, fitness, swarm size and dimension of the solution space, which can be used to accelerate the convergence of PSO. Each particle’s inertia weight should be self-adjusted according to its own evolution degree and the current swarm evolution degree, which leads to self-adaptive PSO. Self-Adaptive PSO (SAPSO) uses two main states for each particle, the exploitative state and the explorative state. In the exploitative state, a particle is attracted by its personal best position and the current global best position, as in the original PSO algorithm. In the explorative state, a particle is repelled away from its personal best position and personal worst position to search unreached regions. The values of the social and cognitive learning rates are calculated for each particle using the state information contained in the evolutionary factor.
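For reference, the classic PSO update on which SAPSO builds can be sketched as follows. This is a simplified illustration: the inertia weight w, learning rates c1 and c2, and the sphere test function are assumptions, and the per-particle self-adaptive weight and repulsive explorative state of SAPSO are deliberately omitted for brevity:

```python
import numpy as np

def pso(fitness, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Classic PSO: each particle is attracted toward its personal best and
    the global best.  In SAPSO the inertia weight w would additionally be
    self-adjusted per particle from its evolution degree."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

best = pso(lambda p: np.sum(p ** 2))   # minimize the sphere function
```

In codebook design, the "position" of a particle would encode a candidate codebook and the fitness would be the average distortion.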
Figure 2: Original training image: Lena
Comparing different algorithms
Figure 3: Testing results with the Lena image: (a) the 256x256 Lena image, (b) histogram of the original Lena image, (c) LBGVQ reconstructed image of Lena (M=256, PSNR=32.3532), (d) histogram of the LBGVQ reconstructed image, (e) FPSOVQ reconstructed image of Lena (M=256, PSNR=36.3078), (f) histogram of the FPSOVQ reconstructed image, (g) FSAPSOVQ reconstructed image of Lena (M=256, PSNR=43.0436), (h) histogram of the FSAPSOVQ reconstructed image. The standard test image Lena is used as the original testing pattern to verify that the FSAPSO learning algorithm has better global and more robust performance than the FPSO and LBG learning schemes. The difference in performance between LBG, FPSO and FSAPSO gradually becomes larger as the size of the codebook is increased. The SAPSO learning scheme can avoid the local minima of PSO and maintain a higher PSNR value across different codebook sizes. The three results generated by running the FSAPSO learning method have similarly good curves, which shows the powerful and adaptive ability of the FSAPSOVQ learning scheme to deal with the initialization problem in the design of complex codebooks, where LBG fails; it is also able to escape the local minima in which the FPSOVQ learning scheme gets stuck, which improves its performance drastically. In real-world image applications, the information to be processed is usually vague and variant. The FSAPSOVQ learning scheme is self-adaptive enough to handle such vague and variant environments and to design good codebooks that reconstruct the compressed image with high fidelity. Thus, the evolutionary FSAPSO learning scheme finds the optimal parameters of the fuzzy inference system, and appropriate codebooks are then generated with the decision of the soft fuzzy inference analysis to achieve image compression.
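The PSNR values quoted above follow the standard definition for 8-bit images and can be computed as below (the toy arrays are illustrative):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images; a higher PSNR
    indicates a more faithful reconstruction."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                  # a single pixel off by 10 grey levels
quality = psnr(a, b)           # roughly 46 dB for this toy case
```

The ~4 dB and ~7 dB gaps between LBGVQ, FPSOVQ and FSAPSOVQ in Figure 3 correspond to substantial reductions in mean squared reconstruction error, since PSNR is logarithmic in the MSE.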
Based on the self-adaptive particle swarm optimization learning scheme and the flexible membership function of the fuzzy inference system, the simulations demonstrate the advancement of the FSAPSOVQ-based image compression system.
Auxiliary controlled SVS for Damping SSR in a Series Compensated Power System Dr. Narendra Kumar, Professor & Head (EE) The work presented in this article has been performed under the AICTE R&D Project, “Enhancing the power system performance using FACTS devices”, in the FACTS Research Laboratory, Delhi Technological University, Bawana Road, Delhi-110042.
In recent years, the static VAR system (SVS) has been employed to an increasing extent in modern power systems due to its capability to work as a VAR generation and absorption system. Besides voltage control and improvement of transmission capability, an SVS in coordination with auxiliary controllers can be used for damping power system oscillations. Damping of power system oscillations plays an important role not only in increasing the power transmission capability but also in stabilizing power system conditions after critical faults, particularly in weakly coupled networks. The proposed SVS control strategy utilizes the effectiveness of combined reactive power and voltage angle (CRPVA) SVS auxiliary control signals. It is found that the torsional oscillations are effectively damped out and the transient performance of the series compensated power system is greatly enhanced. The beauty of the proposed scheme is that it derives its control signals from the point of location of the SVS, making the scheme easily implementable. The SVS is considered to be located at the middle of the transmission line due to its optimum performance there.
The study system (Fig.1) consists of a steam turbine driven synchronous generator supplying bulk power to an infinite bus over a long transmission line. An SVS of switched capacitor and thyristor controlled reactor type is considered located at the middle of the transmission line which provides continuously controllable reactive power at its terminals in response to bus voltage and combined reactive power and voltage angle (CRPVA) SVS auxiliary control signals. The series compensation is applied at the sending end side of SVS along the line.
Fig. 1. Study system
A Case Study
The system considered for analysis is similar to the IEEE first benchmark model. It consists of two synchronous generators, each of 555 MVA, which are represented by a single equivalent unit of 1110 MVA at 22 kV. The electrical power is supplied to an infinite bus over a 400 kV, 600 km long transmission line. The SVS rating for the line has been chosen, by performing a load flow study, to be 100 MVAR inductive to 300 MVAR capacitive. About 40% series compensation is used at the sending end of the transmission line.
A. Dynamic Performance The eigenvalues have been computed for the system without and with the CRPVA SVS damping scheme for a wide range of power transfer. Table 1 presents the parameters of the SVS reactive power and voltage angle auxiliary controllers; these parameters have been optimally chosen by performing an exhaustive root locus study. Table 2 presents the eigenvalues for the system at generator power PG = 200, 500 and 800 MW, without and with the proposed scheme. When the damping scheme is not applied, one mechanical mode with frequency 4.9157 rad/sec is found to be unstable at PG = 800 MW. However, at PG = 500 and 200 MW, this mode is stable. When the CRPVA SVS damping scheme is applied, the unstable mechanical mode (4.9157 rad/sec) at 800 MW is effectively stabilized. At 500 and 200 MW, the damping is also considerably enhanced. As the system is already stable at 500 and 200 MW, the eigenvalues with the proposed scheme are not presented for those cases in the table, for the sake of brevity. TABLE 1 AUXILIARY CONTROLLER PARAMETERS
SVS auxiliary signals Voltage Angle
TABLE 2 SYSTEM EIGENVALUES WITHOUT AND WITH CRPVA SVS DAMPING SCHEME

Without CRPVA scheme                                        | With CRPVA SVS scheme
200 MW            | 500 MW            | 800 MW             | 800 MW
0±j298.1          | 0±j298.1          | 0±j298.1           | 0±j298.1
.0436±j202.73     | .058±j202.73      | .0879±j202.72      | -.0247±j202.83
.0136±j160.55     | .007±j160.54      | -.0047±j160.52     | -.0026±j160.47
-.0007±j126.97    | -.001±j126.97     | -.0027±j126.96     | -.007±j126.96
-.016±j98.86      | -.0064±j98.83     | .0042±j98.74       | -.045±j98.65
-.4096±j4.47      | -.1644±j4.93      | .178±j4.97         | -2.47±j6.84
-.9822+j.82       | -.8177+j.85       | -.885+j.91         | -.5768+j.83
-37.25            | -37.72            | -38.61             | -32.06
-28.2             | -32.28            | -33.23             | -6.182
-2.42             | -2.83             | -3.04              | -2.948
-25.7248±j23.62   | -25.65±j23.91     | -25.73±j24.1       | -25.63±j23.76
-.9822-j.82       | -.8177-j.85       | -.8857-j.91        | -.5768-j.83
-3.4341±j3507.33  | -3.43±j3507.6     | -3.27±j3499.08     | -3.27±j3499.0
-3.4345±j2879.33  | -3.44±j2879.6     | -3.27±j2871.08     | -3.27±j2871.08
-13.35±j2523.96   | -13.35±j2524.62   | -13.22±j2495.34    | -13.24±j2495.36
-14.92±j1895.97   | -14.92±j1896.62   | -14.92±j1867.35    | -14.89±j1867.31
-11.742±j1314.0   | -11.99±j1307.79   | -12.69±j1137.9     | -15.459±j1139.58
-15.209±j686.93   | -15.58±j680.64    | -18.90±j510.1      | -6.451±j511.12
-12.46±j446.83    | -12.64±j445.78    | -12.92±j443.94     | -13.17±j445.82
-7.22±j311.7521   | -6.32±j311.66     | -5.0831±j311.45    | -4.01±j308.9
-9.295±j186.64    | -9.87±j188.09     | -10.49±j190.84     | -9.79±j207.09
-545.89±j81.51    | -545.23±j81.57    | -49.90±j74.74      | -546.74±j82.75
-55.8214±j71.20   | -53.23±j69.84     | -545.29±j74.41     | -7.477±j32.34

B. Transient Performance A digital time domain simulation of the system under large disturbance conditions has been carried out on the basis of the nonlinear differential equations, with all nonlinearities and limits considered. The load flow study is carried out to calculate the operating point. The fourth-order Runge-Kutta method has been used for solving the system's nonlinear differential equations. The natural damping of the system has been considered to be zero so that the effect of the controlling scheme can be examined exclusively. The disturbance is simulated by a 30% sudden increase in input torque for 0.1 sec. Fig. 8 shows the transient responses of the system without any auxiliary controller. It is seen that the oscillations are sustained and growing, and the system is unstable. Fig. 9 shows the dynamic response curves when the proposed CRPVA SVS auxiliary controller is applied. It is seen that the torsional oscillations due to subsynchronous resonance (SSR) are effectively damped out and the system becomes stable.
Fig. 8. Response curves without any auxiliary controller. Fig. 9. Response curves with the CRPVA SVS auxiliary controller.
The article presents a new SVS control strategy for damping torsional oscillations due to subsynchronous resonance (SSR) in a series compensated power system. The proposed SVS control strategy utilizes the effectiveness of combined reactive power and voltage angle (CRPVA) SVS auxiliary control signals. A digital computer simulation study, using a nonlinear system model, has been carried out to illustrate the performance of the proposed SVS controller under large disturbances. It is found that the torsional oscillations are effectively damped out and the transient performance of the series compensated power system is greatly improved.
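The fourth-order Runge-Kutta method used in the simulation study advances the system state through one time step as in the generic sketch below; the damped-oscillator example merely stands in for the study system's actual machine and network equations, which are not reproduced here:

```python
import numpy as np

def rk4_step(f, t, x, h):
    """One fourth-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# A damped oscillator x'' + 2*zeta*w*x' + w^2*x = 0 in state-space form,
# standing in for the study system's nonlinear differential equations.
w, zeta = 2 * np.pi, 0.1
f = lambda t, s: np.array([s[1], -2 * zeta * w * s[1] - w * w * s[0]])

state, h = np.array([1.0, 0.0]), 1e-3
for i in range(1000):          # integrate from t = 0 to t = 1 s
    state = rk4_step(f, i * h, state, h)
```

With positive damping the oscillation amplitude decays step by step, which is exactly the behaviour the CRPVA controller is designed to impose on the torsional modes.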
DIGITAL IMAGE WATERMARKING Jeebananda Panda, Assistant Professor, ECE Dept.
INTRODUCTION The growth of the internet is the driving force behind the recent spread of digital media distribution over networks. This creates an acute need for media authentication, because such digital content can be easily edited or modified with widely available software tools. As a solution for content authentication, digital watermarking is drawing considerable attention and has become an active research field. A watermarking algorithm consists of the watermark structure, an embedding algorithm, and an extraction (or detection) algorithm. Watermarks can be embedded in the pixel domain or in a transform domain. Digital watermarks can be classified as robust, fragile, and semi-fragile. A robust watermark is designed to survive even when the watermarked content is severely attacked, and is therefore applied in copyright protection. A fragile watermark, on the other hand, is destroyed by even a slight change to the marked media; image authentication is therefore an important application of it. As a trade-off between robustness and fragility, a semi-fragile watermark, which resists "content preserving" operations (such as JPEG compression) while remaining sensitive to "content altering" transforms, is more practicable than a fragile watermark for image authentication. In the classification of watermarking schemes, an important criterion is the type of information needed by the detector: • Non-blind schemes require both the original image and the secret key(s). • Semi-blind schemes require the secret key(s) and the watermark bit sequence. • Blind schemes require only the secret key(s). In all transform-domain watermarking schemes, there is a conflict between robustness and transparency. If the watermark is embedded in the perceptually most significant components, the scheme is robust to attacks but the watermark may not meet the imperceptibility criterion.
On the other hand, if the watermark is embedded in perceptually insignificant components, it is easier to hide the watermark but the scheme may be less resilient to attacks. Watermarking is thus a technique that allows an individual to add hidden copyright notices or other verification messages to digital media. The message is a group of bits describing information pertaining to the signal or its author. Watermarking is the embedding of unobtrusive (unnoticeable) marks or labels, representable as bits, into digital content. Embedded marks are generally invisible (or imperceptible) but can be detected or extracted for verification purposes.
Need for Watermarking The inherent flexibility of the Internet lets users transact with one another to create, distribute, store, peruse, subscribe to, enhance, modify and trade digital content in various forms: text documents, databases, e-books, still images, audio, video, computer software and games. The use of an open medium like the Internet raises concerns about the protection and enforcement of intellectual property rights (IPR) over the digital content involved in these transactions. In addition, unauthorized replication and manipulation of digital content is relatively trivial and can be done with inexpensive tools. Types of watermarks Visible
◦ Signal changed completely ◦ Watermarked signal different from the original
Invisible ◦ Signal not changed to a large extent ◦ Minor variations to the output signal Watermark Categories Robust Watermark It sticks to the document (image, video, audio or text) in which it is embedded; removing it destroys the quality of the signal. It is used for copyright protection. Fragile Watermark It breaks very easily when the host signal is modified. It is used for tamper detection, fingerprinting and digital signatures. Semi-Fragile Watermark It is sensitive to signal modification and gives information about the nature and location of an attack. It also provides data authentication and is used for medical and media reports. Types of digital data used for watermarking Image Watermarking Video Watermarking Audio Watermarking Text Watermarking (word-shifting, line-shifting and text-feature methods) A Digital Watermarking System
Unlike encryption, which protects content in transmission but offers no way to examine the data in its protected form, a watermark remains in the content in its original form and does not prevent a user from listening to, viewing, examining, or manipulating the content. Attributes of Watermarks Imperceptibility ◦ the watermarked data resembles the original Robustness ◦ the watermark should survive any reasonable processing inflicted on the source
Capacity ◦ maximize the data-embedding payload Security ◦ the watermarked image should not reveal any clues to the watermark within it Types of watermarking algorithms Non-Blind Semi-Blind Blind Applications of watermarking Authentication Copyright protection Fingerprinting IP protection of sequential circuits Joint/multiple creatorship verification Anti-counterfeiting of commercial bills Watermarking Techniques Spatial Domain Watermarking ◦ watermark embedded by modifying pixel values ◦ spread-spectrum approach Transform Domain Watermarking ◦ watermark embedded in a transform domain such as DCT, DFT or wavelet ◦ coefficients of a global or block transform are modified TECHNIQUES OF EMBEDDING DWM Block-Based Chirp Watermarking Circular Chirp Watermarking Inter-Block Correlation Technique Using Error-Correcting Codes Statistical Approach Self-Watermarking Technique Based on the Integer Wavelet Transform (IWT) Based on the Complex Wavelet Transform (CWT) State of research work in the field of watermarking Much research has been carried out on watermarking in the spatial domain and in the DCT, DWT and IWT domains for improved attributes and quality. Many new techniques have been proposed for encryption of the watermark using an image in real or binary form or pseudo-random noise, for multilevel and multiple watermarking, and for energy-efficient/power-spectrum-compliant watermarks.
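As a minimal illustration of how a blind spatial-domain scheme operates, here is a toy least-significant-bit (LSB) embedder with key-selected pixel positions. This is an illustrative sketch, not one of the specific techniques listed above.

```python
# Toy spatial-domain watermark: hide a bit sequence in the least
# significant bits of pixels chosen by a secret key. Extraction needs
# only the key, so this is a blind scheme. Illustrative only.

import random

def embed(pixels, bits, key):
    """Return a copy of `pixels` with `bits` hidden in key-selected LSBs."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), len(bits))
    out = list(pixels)
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit     # overwrite only the LSB
    return out

def extract(pixels, n_bits, key):
    """Recover n_bits hidden by embed() using the same key."""
    rng = random.Random(key)                  # same seed -> same positions
    positions = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]

image = [120, 37, 200, 45, 90, 17, 250, 66]   # a tiny grayscale "image"
mark = [1, 0, 1, 1]
marked = embed(image, mark, key=42)
recovered = extract(marked, len(mark), key=42)
```

Each marked pixel changes by at most one intensity level (imperceptibility), and extraction needs only the key (a blind scheme); a practical scheme would add redundancy and work in a transform domain for robustness against compression.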
Ir SENSITIVITY ANALYSIS OF TRANSMISSION LINES USING MATLAB AND SIMULINK MODEL Rohit Singh, Vice-Chairman IEEE-DTU (2009-10)
INTRODUCTION Loading limits on transmission lines impede power transfers and cause congestion, greatly reducing the effectiveness of systems and increasing the cost of power transmission. Congestion can be effectively eliminated either by building a new transmission line or by increasing the capacity of the existing line between congested zones. Both methods cause the line parameters to change as line capacity increases between the nodes. The sensitivity of a system to variations in its parameters therefore becomes important for operation and planning. Defining a mathematical model with constant parameters for a transmission line requires assigning average values to conductor height and conductor spacing. It is well known, however, that the height and spacing vary considerably along the line due to sag. In describing a transmission system, certain parameters are known with good accuracy and may be taken as constant, whereas others are affected by errors of calculation and may vary along the line. For a transmission line with known sending-end voltage and power, it is of considerable interest to evaluate the sensitivity of the receiving-end voltage and current with respect to these variable parameters. Specifically, pi-equivalent lumped-parameter and distributed-parameter transmission-line models with 1% parametric variation are used. Results are developed using two software packages, MATLAB and SIMULINK. SENSITIVITY ANALYSIS For economic reasons, electric power is generated in bulk near coal pits and transmitted over long-distance EHV transmission lines. The sending-end voltage of a transmission line is kept constant at a specified value by means of automatic voltage regulators and regulating transformers. The facility of voltage control at the sending end helps in controlling the voltage at the receiving end.
A large number of induction motors and synchronous motors which are designed to operate at a fixed rated voltage constitute the major component of the load at the receiving end. The characteristics of these motors may vary considerably if the voltage at their terminals deviates
from the desired value. Inefficient and uneconomical operation of the motors would result if the voltage at the receiving end of the transmission line is not regulated properly. Since the receiving-end voltage and power are functions of the line parameters, terminal load and system frequency, it is of considerable interest to evaluate the sensitivity of the receiving-end voltage and power with respect to the various parameters, such as the spacing between conductors, the height of the conductors above the earth plane, the radius of the conductor, the resistivity of the conductor material, the system frequency and the line length, for a given sending-end voltage and power. A simplified model of the transmission line is used for calculating the sensitivities of the receiving-end voltage and power. The sensitivity analysis is then extended to a multiconductor transmission line. The starting point in both cases is the formation of the sensitivity functions of the series impedance and shunt admittance per unit length of the line with respect to the various parameters of interest; for the single-phase line these are sensitivity functions of scalar quantities, whereas in the multiconductor case they are sensitivity functions of matrix quantities. A single-phase transmission line can be modeled by the exact long-line equations:
V1 = V0 cosh(γl) − Z0 I0 sinh(γl)
I1 = I0 cosh(γl) − (V0/Z0) sinh(γl)
where V0, V1 and I0, I1 are respectively the sending-end and receiving-end voltages and currents, Z0 and γ are respectively the surge impedance and propagation constant of the line, and l is the line length. SENSITIVITY ANALYSIS OF MULTICONDUCTOR LINE USING SIMULINK MODEL For the sensitivity analysis of a multiconductor line with Simulink, a model was developed for a 300 km long line. The line was taken to consist of a total of 10 pi sections, each 30 km in length. The line parameters, viz.
positive and zero sequence resistance, positive and zero sequence inductance, and positive and zero sequence capacitance per unit length were entered into a dialogue box and the line was simulated for a sending end rms
voltage of 220 kV. The voltage at the receiving end was obtained from the simulation results. The sensitivity analysis was thus done by first obtaining the values of the line parameters for the original case, then calculating the changed parameters for a 1% change in the radius of the conductor, the height above the earth's surface, and the spacing between conductors respectively. The simulation was then run again with the changed parameters and the receiving-end voltage was obtained. The Simulink results for the sensitivity analysis of the receiving-end voltage were tabulated against the columns: inductance (H/km x 10^-3), capacitance (F/km x 10^-9), receiving-end voltage (x 100 kV), and sensitivity of the receiving-end voltage, for the following cases (parameter values in m):
ORIGINAL**
0.99r = 0.012573    1.01r = 0.012827
0.99D = 6.03504     1.01D = 6.15696
0.99H = 7.5438      1.01H = 7.6972
**ORIGINAL denotes the original conditions of the line, viz. r = 0.0127 m, D = 6.096 m, and H = 7.62 m. Using these values, the normalized sensitivity with respect to each parameter could easily be calculated. It was observed that there was virtually no change in the receiving-end voltage for a 1% variation in the parameters; the change was too small to depict.
The results using Matlab for the sensitivity analysis of the receiving-end voltage are:
1. Normalized sensitivity w.r.t. radius of the conductor = -0.028024914233845%
2. Normalized sensitivity w.r.t. spacing = 0.018630339004641%
3. Normalized sensitivity w.r.t. height above earth = 0.009394575229204%
The results using Matlab for the sensitivity analysis of the receiving-end current are:
1. Normalized sensitivity w.r.t. radius of the conductor = 0.006854602191405%
2. Normalized sensitivity w.r.t. spacing = -0.006935318754632%
3. Normalized sensitivity w.r.t. height above earth = 0.000080716563226%
COMPARISON AND RESULTS The results obtained from Matlab and Simulink are numerically close. The sensitivity model discussed above can be used to obtain greater insight into the electrical design of overhead lines. In addition, the effect of parameter-evaluation errors on the operating conditions of transmission lines can be ascertained. We can thus conclude that: The receiving-end voltage is least sensitive to the height above the earth's surface. The regulation of transmission improves with an increase in spacing and height. The efficiency of transmission decreases with an increase in the radius of the conductor.
Faculty Advisor: Rachna Garg
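The normalized-sensitivity procedure described above is essentially a finite-difference ratio: evaluate the receiving-end voltage, perturb one parameter by 1%, and form (ΔV/V)/(Δp/p). A sketch follows; the per-km constants, the simplified single-phase long-line model, and the load values are illustrative assumptions, not the article's Simulink data.

```python
# Sketch of the sensitivity procedure: evaluate |V1| from the exact
# long-line equations, perturb the conductor radius by 1%, and form the
# normalized sensitivity (dV/V)/(dp/p). All constants are assumed,
# illustrative values; the model is a single-phase approximation.

import cmath, math

def receiving_end(r_cond, D=6.096, length_km=300.0, f=50.0,
                  Vs=220e3, Is=500.0):
    """|V1| from V1 = V0 cosh(gl) - Z0 I0 sinh(gl)."""
    w = 2 * math.pi * f
    # per-metre constants from conductor radius r and spacing D (approx.)
    L = 2e-7 * math.log(D / (0.7788 * r_cond))           # H/m
    C = 2 * math.pi * 8.854e-12 / math.log(D / r_cond)   # F/m
    z = 0.05e-3 + 1j * w * L     # series impedance per metre (R assumed)
    y = 1j * w * C               # shunt admittance per metre
    Z0 = cmath.sqrt(z / y)       # surge impedance
    g = cmath.sqrt(z * y)        # propagation constant per metre
    l = length_km * 1e3
    V1 = Vs * cmath.cosh(g * l) - Z0 * Is * cmath.sinh(g * l)
    return abs(V1)

r = 0.0127                                    # nominal conductor radius, m
v_base = receiving_end(r)
v_pert = receiving_end(1.01 * r)              # 1% change in the radius
sensitivity = ((v_pert - v_base) / v_base) / 0.01
```

The same pattern repeats for spacing D and conductor height; in the article the voltage itself comes from the Simulink pi-section model rather than a closed-form expression, but the finite-difference ratio is identical.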
MINI BAJA Ashish Sharma, Ayush Goyal, Aditya Krishna, Ankit Goila, Aadityeshwar Singhdeo, Rishabh Bhargava ABSTRACT The objective of the Mini Baja competition is to design and manufacture a "fun-to-drive", versatile, safe, durable, high-performance off-road vehicle. This vehicle must be capable of negotiating the most extreme terrain with confidence and ease.
The new frame design began with our previous vehicle frames as references and the restrictions in the rule book. A 3-D frame was created using SolidWorks modeling software, which allowed fast design modifications to be made as required. A space frame was used, constructed of plain-ended seamless alloy-steel tubing to create a strong, light, and durable frame, with the material kept within the rule-book guidelines. SUSPENSION & VEHICLE DYNAMICS A Mini Baja suspension system must satisfy the following design requirements: • Provide sufficient sprung-mass vibration isolation to maintain satisfactory ride quality, while maintaining a high tire-ground contact rate and a low tire vertical-load fluctuation rate to improve road holding and handling • Limit chassis roll during cornering to prevent roll-over, decrease roll camber, and therefore decrease steering reaction time • Prevent excessively high jacking forces by managing the static roll-center location and roll-center migration • Limit lateral tire scrub to maintain straight-line stability and minimize horsepower losses at the rear suspension • Control the lateral load-transfer distribution to influence both steady-state and limit-of-adhesion oversteer/understeer handling characteristics DESIGN CONSIDERATIONS (FRONT AND REAR)
FRONT SUSPENSION DESIGN An independent double-wishbone suspension linkage configuration was used at the front end of the vehicle. A higher kingpin angle results in lower scrub and higher steering returnability at low speeds, but it also increases wheel lift, which tends to destabilize the car. Hence a moderate kingpin angle of 7.6 degrees was chosen and the resulting scrub radius was accepted. Due to the inboard disc brakes, the rear scrub radius is reduced to zero, which considerably improves straight-line stability and also minimizes power losses. A front toe-out of 1 degree was chosen as a trade-off between straight-line instability and quicker steering response. The loss in straight-line stability due to toe-out was partially compensated by providing a negative caster of 7.5 degrees in the front. The suspension should provide a slight camber angle in the direction of rotation for optimized performance; hence a negative camber of 3 degrees was set in the rear. SUSPENSION COMPONENT ANALYSIS SLA wishbones were used in the 2010 vehicle. 1" OD, 14-gauge MS 1018 pipes were used for the UCA, whereas 1" OD, 12-gauge pipes were used for the LCA of the suspension. Gusset plates are used to reduce bending stress and increase torsional rigidity. The front-suspension UCA inboard-mounted rod-end joints were replaced with a single, bushed pivot joint. Two inboard-mounted pivot joints were used to mount each LCA to the chassis. Threaded ball joints were used at the uprights to facilitate static camber adjustment and increase strength. The greater the effective distance between the transverse links, the smaller the forces in the suspension control arms and their mountings. As the body tilts, it produces a change in ground height between the inside and outside wheels; by careful design, the suspension geometry was made to alter the tracking direction to provide an oversteering effect.
This system was not adopted on the front suspension, to avoid interference with the steering geometry. The rear suspension is based on a semi-trailing arm to provide anti-dive and anti-squat features, obtained by converging the swing axis of the double-wishbone suspension at a point along the wheelbase. The front uprights were redesigned to incorporate the changes in suspension geometry, namely the kingpin and caster angles,
which also resulted in a 7% weight reduction. The new upright, made of 6061 Al alloy, has improved strength and serviceability, the factor of safety being above 2.5 in all cases.
STEERING SYSTEM It was decided early on that the steering system needed to perform well but also be compact. With these restrictions in mind we chose a rack-and-pinion steering system, which has many benefits. The overall steering ratio is the ratio of the steering-wheel angle to the average tire angle. An overall steering ratio of approximately 4:1 might be desired, since it would provide a 45° steer angle of the tires with a 180° input at the steering wheel, eliminating the need for hand-over-hand maneuvers. We decided on a steering ratio of 5:1, which provides a 36° steer angle of the tires with a 180° input at the steering wheel. This allows us to increase the steering-arm length to decrease the steering effort. BRAKES DESIGN CONSIDERATIONS The first option the team explored was the use of drum brakes. Through research of previous braking designs, it was determined that a braking-cable arrangement is not feasible. The other consideration was a disc braking system. Braking with this system can be obtained both mechanically and hydraulically; however, a mechanical disc-brake system shares the problems of the drum brake. A hydraulic disc braking system uses fluid displacement to engage the brake calipers on the rotor. This is the more suitable system because no mechanical linkage is needed, so space along the vehicle's chassis is not required. ANALYSIS The calipers are powered by dual master cylinders. Both are standard 3/4 in. bore direct-mounted master cylinders with 4 oz. reservoirs. Two master cylinders are used to increase safety through dual redundancy as well as for SAE rules compliance. The braking analysis considered the following parameters: front disk O.D., height of C.G., coefficient of friction, front weight-bias percentage, and front master-cylinder bore.
The vehicle was required to stop within a span of 5 m when moving at an initial speed of up to 48 kmph. Taking Ui = 42 kmph (11.5 m/s), Vf = 0 m/s and S = 5 m, the required deceleration is a = -1.34 g. From this deceleration it was found that:
Torque to be generated by the front brakes = 384.806 N·m
Torque to be generated by the rear brakes = 95.522 N·m
Force to be generated by the front caliper = 3498.24 N
Force to be generated by the rear caliper = 796.26 N
DRIVETRAIN OBJECTIVES High speed is desired for the acceleration and speed trials, while high torque is preferred for the towing and hill-climbing events. These characteristics are achieved without continuous shifting by coupling a continuously variable transmission (CVT) to the engine. Unlike last year's manual transmission, the CVT allows any driver, of any skill level, to focus on the obstacles ahead without concentrating on proper gear selection. The CVT eliminates the burden of a manual clutch and is far simpler than an automatic transmission. DESIGN The drivetrain consists of a 10 HP engine, a CVT and a 2-step chain-sprocket system providing the final gear reduction. The Comet 790 Series CVT was selected because of its large range of gear ratios.
As the engine reaches its governed rpm limit of 3600 rpm, the gear reduction across the CVT becomes 0.54:1, serving as an "overdrive" for the car. At low engine speeds the CVT produces a reduction of 3.38:1, providing the necessary torque. CONCLUSION The DTU Mini Baja Team 2010 designed its vehicle for the occasional weekend off-road enthusiast, keeping six categories in mind: safety, performance, durability, comfort, cost and serviceability. Since the design process is never-ending, design and modification will continue well beyond the competition. Experience gained from the competition and testing will highlight areas that require design improvements.
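The braking and CVT figures quoted above follow from elementary kinematics; a quick check using the stated 11.5 m/s initial speed and 5 m stopping distance:

```python
# Check the quoted braking deceleration from v^2 = u^2 - 2*a*s,
# with u = 11.5 m/s and s = 5 m as stated in the text.

G = 9.81                        # m/s^2

u = 11.5                        # initial speed, m/s (42 kmph)
s = 5.0                         # required stopping distance, m
a = u ** 2 / (2 * s)            # constant deceleration needed, m/s^2
a_in_g = a / G                  # ~1.34 g, matching the quoted value

# At the governed limit, the 0.54:1 CVT ratio acts as an overdrive:
engine_rpm = 3600
output_rpm = engine_rpm / 0.54  # output shaft turns faster than the engine
```

The caliper forces and brake torques then follow from this deceleration once the vehicle mass, weight bias and rotor radii (tabulated in the article) are inserted.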
Digital Inpainting Himadri Kakkar, 3rd yr COE Inpainting is the technique of modifying an image in an undetectable form. The object of inpainting is to reconstitute the missing or damaged portions of a painting, in order to make it more legible and to restore its unity. Digital inpainting replicates the basic techniques used by professional artists in the restoration of paintings. The structure of the area surrounding the inpainting region is continued into the gap: contour lines are drawn via the prolongation of those arriving at the boundary of the inpainting region, the different regions inside the inpainting domain, as defined by the contour lines, are filled with color matching that of the boundary, and finally the small details are painted. The algorithm for digital inpainting consists of these steps. After the region to be inpainted is selected, it is filled automatically with the surrounding information. Digital techniques are becoming a widespread way of performing inpainting, ranging from fully automatic detection and removal of scratches in film, all the way to software tools that allow a sophisticated interactive process.
Digital Inpainting Algorithm The input to the algorithm is the image to be restored and the mask that delimits the portion to be inpainted; the mask separates the region to be inpainted from the rest of the image. The whole original image first undergoes anisotropic diffusion smoothing. The purpose of this is to minimize the influence of noise on the estimation of the direction of the isophotes (lines of constant intensity, or contour lines) arriving at the boundary of the region. Anisotropic diffusion is the diffusion of intensity at different rates in different directions. Perona and Malik formulate the anisotropic diffusion filter as a diffusion process that encourages intraregion smoothing while inhibiting interregion smoothing.
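A minimal sketch of one Perona-Malik diffusion step on a small grayscale grid follows. It is pure Python and illustrative only; the conductance function and constants are assumptions, and real implementations are vectorized.

```python
# One Perona-Malik anisotropic diffusion step: intensity diffuses
# strongly inside smooth regions and weakly across strong edges,
# because the conductance g(|grad u|) falls off as the gradient grows.

import math

def diffuse_step(u, k=20.0, dt=0.2):
    """u: 2-D list of floats (a grayscale image). One diffusion step."""
    h, w = len(u), len(u[0])
    g = lambda d: math.exp(-(d / k) ** 2)   # Perona-Malik conductance
    out = [row[:] for row in u]
    for i in range(h):
        for j in range(w):
            total = 0.0
            # 4-neighbour fluxes, with reflecting (Neumann) boundaries
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni = min(max(i + di, 0), h - 1)
                nj = min(max(j + dj, 0), w - 1)
                d = u[ni][nj] - u[i][j]
                total += g(abs(d)) * d
            out[i][j] = u[i][j] + dt * total
    return out

# a tiny image: dark left half, bright right half (a sharp edge)
img = [[0.0] * 4 + [100.0] * 4 for _ in range(4)]
smoothed = diffuse_step(img)
```

With the sharp 0/100 edge in `img`, the conductance across the edge is essentially zero, so the edge survives the step, while small intensity bumps inside a region are smoothed away. This is the property the inpainting algorithm relies on when alternating diffusion with transport steps.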
Now the image enters the inpainting loop, where only the values inside the inpainting domain are modified. These values change according to the discrete implementation of the inpainting procedure, i.e. the transport partial difference equation (discrete form). Here, u denotes the image to be inpainted, t the iteration step, r = t/T with T the total number of iteration steps, and s = 1. Neumann boundary conditions are applied on the boundary, since there the equation becomes a third-order differential equation and its discrete implementation differs. The rest of the inpainting domain is filled by applying the relation iteratively, thus shrinking the inpainting region progressively inwards. After every few iterations, anisotropic diffusion is applied: after every 15 steps of transport, 2 steps of diffusion are applied, for a total of 3000 iterations. The total number of iterations depends on the size of the inpainting region. Color images are considered as a set of three images (one for each color channel), and the technique described above is applied independently to each. To avoid the appearance of spurious colors after composition of the three images, a color model with one intensity and two color components, such as HSI or HSV, is used; both the RGB (red, green, blue) and HSI (hue, saturation, intensity) models were employed. The major limitation of the technique is the reproduction of large textured regions, and the algorithm does not give satisfactory results on noisy images; better algorithms exist that take the noise present in the images into account. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text such as dates, subtitles, or publicity; and the removal of entire objects from the image (e.g., removal of a stamped date or red-eye from photographs).
Fig.: HSI color model
MICROBIAL FUEL CELL Abhijeet Singh, Gaurav Sinsinbar, Mahima Agarwal, Tanvi Agrawal Our present dependence on fossil fuels for energy, and their fast-depleting reserves, is forcing us to look towards alternative, environment-friendly sources to satisfy our ever-increasing energy needs. One such technology, still in its infancy, is the biofuel cell (BFC). BFCs are devices capable of directly transforming chemical energy to electrical energy via electrochemical reactions involving biochemical pathways. BFCs utilize biological moieties such as enzymes and living cells to generate power directly from the chemical energy contained within various organic and inorganic species. The basic layout of a BFC is shown in the figure.
The electrodes are separated by a semi-permeable membrane and are suspended in solution. The biological species (microorganisms or enzymes) can be in solution (or suspension) within the anodic compartment of the cell, or alternatively be immobilized at the electrode. BFCs can be subdivided into: Microbial fuel cells (MFC) Enzymatic fuel cells MFC technology represents a novel approach of using bacteria to generate bioelectricity by oxidation of organic waste and renewable biomass. In MFCs, microorganisms such as bacteria act as biocatalysts for the electrochemical reactions. They gain energy by transferring electrons from an electron donor (glucose, organic matter, etc.) to an electron acceptor or oxidizing agent (oxygen, etc.). The electrons produced during the reaction are not transferred directly to the electron acceptor; instead they are diverted to the anode and then conducted to the cathode across an external circuit. For every electron conducted, a proton is transported across the membrane to the cathode, completing the reaction and sustaining the electric current. This movement of protons maintains the pH of the anodic compartment and provides an environment favorable to the microorganisms. In our experiments on the microbial fuel cell, a dual-chamber cell was studied with potassium ferricyanide as the catholyte, maintained at pH 7.5. The source of microorganisms was wastewater from the Rithala sewage treatment plant and the DCE lake. We were thus also able to study the effect of
the reactions on the quality of the water. Instead of using the conventional Nafion membranes for ion exchange, we used a salt bridge made of agar and compared our readings with those obtained using Nafion membranes. The use of agar as a cheap alternative to the extremely expensive Nafion membranes is also being studied, as are the effects of variations in the pH of the anode and in the composition of the salt bridge and anolyte. Mechanism of Operation Electrochemically active microorganisms are capable of extracellular electron transfer (EET), and this mechanism can be used to transfer electrons to an electrode (the anode) while they oxidize (and thus catabolize) the organic materials in wastewater. Bioelectrochemical wastewater treatment can be accomplished by electrically coupling a microbial bioanode to a counter electrode (the cathode) that performs a reduction reaction. As a result, the electrode reactions occur and electrons flow from anode to cathode, i.e. an electrical current flows. In a typical process of aerobic respiration, the oxidation of an organic compound such as glucose gives the following half reaction:
C6H12O6 + 6H2O → 6CO2 + 24H+ + 24e- + 4ATP
This is the summary reaction of the biochemical pathways of glycolysis and the TCA cycle. The half reaction for the reduction of oxygen by aerobic microbes through the electron transport system is:
24H+ + 24e- + 6O2 → 12H2O + 34ATP
The proton motive force that develops through the transfer of electrons in the electron transport system is used by the cell to generate ATP; this step produces the bulk of the ATP formed from aerobic oxidation of an organic compound. The overall equation for the aerobic oxidation of an organic compound is therefore:
C6H12O6 + 6O2 → 6CO2 + 6H2O + 38ATP
In a microbial fuel cell, these processes are physically separated: the growth of the organism occurs at the anode, while oxygen is the terminal electron acceptor at the cathode.
The protons produced by the oxidation of the substrate pass through the exchange membrane to the cathode chamber. Description of the Setup
Fig: Representation of setup
The dual-chambered MFC was operated in the lab using cylindrical plastic boxes (volume: 1 L each) for both the anodic and cathodic chambers. The chambers were connected by a salt bridge: a plastic tube filled with 5% agar solution. This agar column served as the passage for the transfer of protons from the anodic to the cathodic chamber, in place of a proton exchange membrane. The anode compartment was filled with wastewater rich in organic components, whose oxidation releases the electrons and protons used for electricity generation. The wastewater used for the experiment was obtained from the Rithala wastewater treatment plant. The cathodic compartment was filled with a solution of potassium ferricyanide (50 mM) and phosphate buffer. Potassium ferricyanide generates the potential difference between the anodic and cathodic chambers, consuming electrons from the external circuit and protons through the agar column. The phosphate buffer was used to maintain a pH of 7.5 in the cathodic compartment. Plain graphite plates (8 cm x 5 cm x 0.5 cm) without any coating were used as electrodes for both the anode and the cathode.
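The 24-electron stoichiometry given earlier sets an upper bound on the charge obtainable per mole of glucose; a quick idealized calculation (assuming 100% coulombic efficiency, which real cells do not reach):

```python
# Theoretical charge from complete oxidation of glucose in an MFC:
# 24 electrons per glucose molecule times the Faraday constant.
# An idealized upper bound; 100% coulombic efficiency is assumed.

F = 96485.0                     # Faraday constant, C per mol of electrons

def max_charge(moles_glucose, electrons_per_molecule=24):
    """Coulombs available if every electron reaches the anode."""
    return moles_glucose * electrons_per_molecule * F

def avg_current(moles_glucose, seconds):
    """Average current (A) if that charge flows over `seconds`."""
    return max_charge(moles_glucose) / seconds

charge = max_charge(0.001)                # 1 mmol of glucose (about 0.18 g)
current = avg_current(0.001, 24 * 3600)   # spread over one day
```

One millimole of glucose can thus supply at most roughly 2.3 kC, i.e. an average of about 27 mA sustained over a day; measured MFC currents are far lower because only a fraction of the electrons reach the anode.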
Fig.: Actual setup in the laboratory Results The setup in which the anodic chamber was maintained at acidic pH showed high, nearly constant voltage readings for the longest time, indicating the presence of electrochemically active bacteria that were activated at acidic pH and helped maintain a high potential difference. The setup using glucose showed strongly fluctuating voltage with no clear cause; this may be due to erratic consumption of glucose by the microbes, yielding crests and troughs in the voltage graph. Using KCl in the salt bridge gave high, nearly constant values of both current and voltage compared with a salt bridge without KCl, pointing to an increase in conductivity due to the presence of potassium and chloride ions in the salt bridge. An increase in electrode surface area and the use of activated sludge also increased the voltage and current readings. A change in the source of wastewater brought about a considerable change in the readings, owing to the change in the organic content of the wastewater.
Further strategy We are now focusing on identifying specific bacteria in the wastewater which, when used individually in an anode compartment containing a suitable substrate, give maximum efficiency. We are also studying the benefits of agar as a cheaper laboratory substitute for Nafion in the microbial fuel cell and comparing efficiencies. Advantages of MFCs 1) MFCs use organic waste matter as fuel and readily available microbes as catalysts for the biological breakdown of organic waste matter. 2) MFCs have high conversion efficiency compared with enzymatic fuel cells, harvesting up to 90% of the electrons from the bacterial electron transport system. Disadvantage • Because of the limitations of a biological system, the output obtained in terms of voltage and current has an upper limit. Applications of microbial fuel cells 1. Wastewater treatment: the degradation of organic material in wastewater by the microbes for energy generation helps improve the quality of the water. 2. Generation of energy and fuel for transport vehicles: these cells can be run for more than 30 days and have great potential as a source of energy. 3. In-vivo applications and implantable power sources: biosensors are being developed; glucose sensors in pacemakers and for diabetics, and small valves for bladder control, are being studied as breakthrough technologies all over the globe. 4. Powering underwater monitoring devices: MFCs can be used to power devices, particularly in river-bed, sea-bed and deep-water environments where it is difficult to routinely access the system to replace batteries. Conclusion The biofuel cell has immense potential to develop into a technology of the future, especially at a time when alternative sources of energy are being promoted. However, a large amount of work still needs to be done to develop these into commercially viable options.
The large body of work done so far on these cells focuses mainly on maximising efficiency and optimising conditions to obtain maximum output. This technology is still in its infancy, and work is in progress to develop it to its full potential. Faculty Advisor: Dr Navneeta Bharadwaj. Special thanks to Prof. RC Sharma.
WEB ACCESSIBILITY Nikhil Maheshwari, M.Tech(IS) 3rd Sem
The Web Offers Unprecedented Opportunities: The internet is one of the best things that ever happened to people with disabilities. You may not have thought about it that way, but think back to the days before the internet to see why this is so. For example, before the internet, how did blind people read newspapers? They mostly didn't. Audiotapes or Braille printouts were expensive, and a Braille version of the Sunday New York Times would be too bulky to be practical. Mostly, blind people relied on others to read the newspaper aloud to them. This method works, but it makes blind people dependent upon others. Most newspapers now publish their content online in a format that has the potential to be read by the "screen readers" used by the blind. These software programs read electronic text out loud, so that blind people can use computers and access any text content through them. Suddenly, blind people don't have to rely on other people to read the newspaper to them. They don't have to wait for expensive audio tapes or expensive, bulky Braille printouts. They simply open a web browser and listen as their screen reader reads the newspaper to them, and they can do it when they want to, as soon as the content is published.
Falling Short of the Web's Potential: Despite the web's great potential for people with disabilities, this potential is still largely unrealized: websites are often designed and developed without careful consideration of the potential impacts on people with disabilities. For example, some sites can only be navigated using a mouse, and only a very small percentage of video or multimedia content has been captioned for the Deaf. What if internet content is only accessible by using a mouse? What do people do if they can't use a mouse? As soon as you start asking these types of questions, you begin to see that there are a few potential glitches in the accessibility of the internet to people with disabilities.
The internet has the potential to revolutionize disability access to information, but if we're not careful, we can place obstacles along the way that destroy that potential and leave people with disabilities just as discouraged and dependent upon others as before.
Various disability types and their handling methods:
• Visual (low vision, colour blindness, blindness): sound support for every mouse movement and input/output, keyboard accessibility.
• Hearing (deafness): captions for videos and complex content.
• Motor (slow response time, inability to use a mouse, limited fine motor control): voice commands.
• Cognitive (inability to remember or focus on large amounts of information, learning disabilities, distractibility): easy site structure and a site map.
W3C introduced WCAG: If you live in the United States, applicable laws include the ADA, IDEA and the Rehabilitation Act of 1973 (Sections 504 and 508). Many international laws also address accessibility. The Web Content Accessibility Guidelines (WCAG) provide an international set of guidelines. They are developed by the Worldwide Web Consortium (W3C), the governing body of the web, and are the basis of most web accessibility law in the world. Version 2.0 of these guidelines is based on four principles:
Perceivable: available to the senses (vision and hearing primarily), either through the browser or through assistive technologies (e.g. screen readers, screen enlargers, etc.).
Operable: users can interact with all controls and interactive elements using the mouse, keyboard or an assistive device.
Understandable: content is clear and limits confusion and ambiguity.
Robust: a wide range of technologies (including old and new user agents and assistive technologies) can access the content.
Our Contribution: Despite the introduction of the Web Content Accessibility Guidelines (WCAG), their popularity among web developers remains low. The reason is their complex nature; even the recently introduced WCAG 2.0 is very difficult to follow. Our contribution is a "Website Development Life Cycle based on Web Content Accessibility Guidelines (WCAG) 2.0"; following it, web developers can develop accessible websites. Visit the websites for more detail on this project: www.dtumtechis.110mb.com, www.dce.ac.in/IS
Conclusion: The web offers many opportunities to people with disabilities that are unavailable through any other medium. It offers independence and freedom.
However, if a web site is not created with web accessibility in mind, it may exclude a segment of the population that stands to gain the most from the internet. Most people do not intend to exclude people with disabilities. As organizations and designers become aware of and implement accessibility, they will ensure that their content can be accessed by a broader population.
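To make one of the WCAG ideas above concrete, here is a toy Python sketch of a single accessibility check: flagging images that lack the alt text a screen reader needs. The class, page snippet and file names are our own illustrative examples, not part of any WCAG tooling; real audits use tools that cover the full guidelines.

```python
from html.parser import HTMLParser

# Toy checker for one WCAG-style rule: every <img> should carry an "alt"
# attribute so screen readers can describe it to a blind user.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []          # src values of images with no alt text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<no src>"))

page = '<p>Hi</p><img src="logo.png" alt="DTU logo"><img src="chart.png">'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)             # ['chart.png']: only the unlabeled image
```

A check like this catches only one rule; the point is that many accessibility requirements are mechanically verifiable, which is what makes a guidelines-driven development life cycle practical.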
Managing Value chains: Beyond Supply Chain Shekhar Raiwani, 2nd year, DSM
Today's competitive environment calls for a tool that will not only give an organization an edge over its competitors but also help it achieve its targets of increased market share and greater brand value. Organizations are no longer production-oriented, and economies of scale have given way to time economies. With more and more orders in the queue, the big challenge for managers is to deliver the product at the right place and at the right time. Concepts like JIT and lean manufacturing are being adopted more and more widely, but the challenge remains to reduce cycle times further, hold less inventory and come closer to true JIT. Proper supply chain management is the single most effective tool to accomplish this, and organizations worldwide are trying their best to make their supply chains as effective and efficient as possible. Dell, for example, created a benchmark by developing a supply chain unique to its industry. High customer involvement, resulting in customized computers delivered on time, is something Dell achieved through its efficient supply chain; owing to this, Dell held about 18 to 19% of the worldwide personal computer market (in 2006). Toyota is another example of supply chain benchmarking. For decades the no. 1 automaker has been a paragon of manufacturing and supply chain excellence; manufacturers all over the world learnt from Toyota that quality is a cost saver rather than an added expense for operations, and its lean production techniques and inventory minimization have transformed the ideal of a factory.
Hence supply chain management (SCM) can be thought of as a concept that leads to operational and strategic benefits. SCM is concerned with smooth, economically driven operations and with maximizing value for the end customer through quality delivery. But SCM has certain limitations, which have given birth to some extended concepts. The limitations arise because SCM as a concept does not extend far enough to capture the customer's (end user's) future needs and how these get addressed; furthermore, it does not encompass the post-delivery, post-evaluation and relationship-building aspects. The concept of Value is a further extension of SCM, much thought about by managers and executives these days, and it provides insight into the issues SCM does not address.
Value Chain
The concept of value can be put under the following interlinked points:
• Customer value is linked to the use of a product or service, thereby distinguishing it from personal "values";
• Customer value is perceived by the customers rather than objectively determined by the seller; and
• Customer value typically involves a trade-off between what the customer receives (e.g. quality, benefits and worth) and what he or she gives up to acquire and use a product or service (e.g. price, sacrifices).
Value is created through a set of interrelated processes involving SCM, the value chain and the customer chain: a series of processes running from raw-material procurement through processing to delivery of finished goods to the customer. All these processes need to be integrated to increase value for the customer. Organizations need to use modern tools such as IT, the internet and ERP, together with speed and agility, to beat the competition. Investing in these gives a speedy and accurate flow of information across all processes, so that value is maximized throughout the chain and a maximum-value product is delivered to the customer. A smooth flow of information helps in working closer with the customer, minimizing inventory, delivering on time and developing highly customized products at low prices. Value chain management (VCM) is thus a much wider concept that includes SCM. VCM is concerned with managing integrated information about product flow, all the way from suppliers to end users, and it is concerned with the customer from start to finish.
Benefits derived from Value Chain Management
The fruits of implementing VCM in an organization come in different flavours: operational, organizational, economic or strategic competitiveness. VCM gives the organization insight into its competitive capabilities and weaknesses in a particular market, helping it develop its value proposition and core competencies and hence increase market share. Further, proper integration of processes into the value chain exposes the non-value-adding processes, which can be removed from the chain along with their cost; the customer receives more value, and processes are better synchronized to deliver it. Moreover, VCM creates customer focus: through a continuous, uninterrupted relationship with customers, information can flow both ways, creating the focus that is so necessary in a modern business environment requiring organizations to move speedily and to be flexible and agile.
Critical factors for VCM success
To accrue the benefits of VCM, many challenges have to be faced. These challenges, or factors, need to be addressed well if the goals set for VCM are to be realized. According to a survey done by Ernst & Young, the following are some of the factors that need to be taken care of:
• Value chain optimization: the diagram below shows the barriers to value chain optimization. It can easily be seen that technology is the least critical factor and price the most critical one.
• Value chain performance: many organizations are unable to improve their performance through the application of VCM. A proper implementation of VCM, with focus on the performance of the value chain and hence of the organization, is much emphasized.
• Value chain strategy: a systematic strategy for implementing the value chain, keeping in mind the future requirements and objectives of the organization, is required. Many organizations fail to develop such a strategy, and some that have started the process have not documented it well. Furthermore, the effectiveness of the value chain can't be measured unless a proper value chain strategy is formed.
• Value chain information sharing: proper information sharing with suppliers as well as customers and vendors is expected. Information should not be limited to the work they are assigned or need; information about the strategy and goals of the organization also needs to be disseminated.
VCM road map
The VCM road map has four pillars, with emphasis placed on speed and agility. The following figure shows the road map along with the four pillars:
VCM vision: organizations will have to put in place a value-creation mission/vision, based on extensive knowledge of the customer, to attain excellence.
IT infrastructure: for a smooth and fast flow of accurate information.
Partnership approach: close partnership with suppliers, vendors and customers.
Process management philosophy: with the implementation of modern tools such as business process re-engineering, processes are analyzed, core activities are established and optimization is delivered across the board.
Reference: International Journal of Production Economics, Volume 87, Issue 3, "Supply Chain Management for the 21st Century Organizational Competitiveness".
50 Gbps Photonics Link Revolutionizing Computing
Nishant Aggarwal, 2nd year, EP (Engineering Physics)
Fifty-one years ago, Robert Noyce and Jack Kilby invented the silicon integrated circuit (IC), never expecting that this invention would affect the lives of all kinds of people around the world. Likewise, it's hard to imagine that Ted Maiman could have foreseen how his invention of the first laser, built from a ruby crystal rod just a year later, would revolutionize industries ranging from medicine to communications. Now, in 2010, these two inventions are coming together in silicon photonics. We've been talking for many years about research to "siliconize" photonics, but until now all the breakthroughs have been at the device or component level. For the first time, a silicon photonics transmitter has been integrated using hybrid silicon lasers that is capable of sending data at 50 gigabits per second (Gbps) across an optical fiber to an integrated silicon photonics receiver chip, which converts the optical data back into electrical form. Recently at Intel Corp. there has been an important advancement in the quest to use light beams instead of electrons as data carriers in computers: the development of a research prototype representing the first silicon-based optical data connection with integrated lasers. The link can move data over longer distances and many times faster than today's copper technology: up to 50 Gb of data per second.
An engineer holds a 50 Gb/s silicon photonics transmit module. Laser light from the silicon chip at the center of the green board travels to the receiver module in the upper right, where a second silicon chip detects the data on the laser and converts it back into an electrical signal. Today, computer components are connected to each other using copper cables or traces on circuit boards. Due to the signal degradation that comes with using metals such as copper to transmit data, these cables have a limited maximum length. This also limits the design of computers, forcing processors, memory and other components to be placed just inches from each other. This research achievement is another step toward replacing these connections with extremely thin and light optical fibers that can transfer much more data over far longer distances, radically changing the way computers of the future are designed and altering the way the datacenter of tomorrow is architected.
Silicon photonics will have applications across the computing industry. For example, tomorrow's datacenter or supercomputer may see components spread throughout a building or even an entire campus, communicating with each other at high speed, as opposed to being confined by heavy copper cables with limited capacity and reach. This will allow datacenter users, such as a search engine company, cloud computing provider or financial datacenter, to increase performance and capabilities and to save significant costs in space and energy.
Dr. Mario Paniccia, Intel Fellow and director of Photonics Research at Intel Labs, holds the thin optical fiber used to carry data from one end of the 50-G silicon photonics link to the other. The silicon transmitter chip uses integrated hybrid silicon lasers along with other silicon photonic devices to send up to 50 Gb of data each second. The 50 Gb/s silicon photonics link prototype is the result of a multiyear silicon photonics research agenda, which included numerous “world firsts.” It is composed of a silicon transmitter and a receiver chip, each integrating all the necessary building blocks from previous Intel breakthroughs including the first hybrid silicon laser co-developed with the University of California at Santa Barbara in 2006 as well as high-speed optical modulators and photodetectors announced in 2007. The transmitter chip is composed of four such lasers, whose light beams each travel into an optical modulator that encodes data onto them at 12.5 Gb/s. The four beams are then combined and output to a single optical fiber for a total data rate of 50 Gb/s. At the other end of the link, the receiver chip separates the four optical beams and directs them into photodetectors, which convert data back into electrical signals. Both chips are assembled using low-cost manufacturing techniques familiar to the semiconductor industry. Intel researchers are already working to increase the data rate by scaling the modulator speed as well as increase the number of lasers per chip, providing a path to future terabit/s optical links — rates fast enough to transfer a copy of the entire contents of a typical laptop in one second. Sources: www.intel.com, www.photonics.com
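The arithmetic of the link described above is easy to check. Here is a back-of-envelope Python sketch; the 250 GB disk-image size used for the terabit example is an illustrative assumption, not a figure from the article:

```python
# Four hybrid silicon lasers, each modulated at 12.5 Gb/s, are wavelength-
# multiplexed onto a single fiber, giving the aggregate link rate.
channels = 4
rate_per_channel_gbps = 12.5
aggregate_gbps = channels * rate_per_channel_gbps
print(aggregate_gbps)        # 50.0

# At a future 1 Tb/s (= 1000 Gb/s) link, an assumed 250 GB laptop disk
# image (250 * 8 gigabits) would transfer in about:
disk_gigabits = 250 * 8
seconds = disk_gigabits / 1000.0
print(seconds)               # 2.0
```

The scaling path mentioned in the article works on both factors of that product: faster modulators raise the per-channel rate, and more lasers per chip raise the channel count.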
LIQUID ELECTRICITY Saurabh Gupta, 3rd year, EE Dept
Transportation is one of the fastest-growing energy demand sectors and is closely tied to oil. While we can use many different fuels to make electricity, the same is not true right now for transportation: globally, about 98 percent of transportation runs on fuel made from oil. So the need arises to think about alternative transport fuels. One of the recent developments across the globe has been the introduction of battery-driven vehicles. The reason electric cars aren't everywhere is simple: at the end of their range, they have to sit stationary for hours while the batteries recharge. This is a pity, because even cars recharged from 'dirty' power stations are three times more environmentally friendly than conventional vehicles. That's because only 20 per cent of the energy from gasoline or diesel actually reaches the wheels, whereas in an electric car it's 60 per cent.
Concept of Liquid Electricity: The basic principle behind refueling an electric car is recharging the spent electrolyte in the battery from an external source. This is what consumes so much time and has deterred a complete switch to electric drives: what recharging does is change the state of the electrolyte fluid in the batteries. The Dutch Innovation Network has come up with the idea of simply pumping the spent electrolyte out and pumping in freshly charged electrolyte: literally, liquid electricity. This would take little more time than filling up with fossil fuel, and the spent electrolyte can be recharged and re-sold. "Liquid electricity" takes the form of a vanadium redox battery, a technology pioneered by the University of NSW.
"With an electrolyte solution the consumer delivers the spent electrolyte back to the 'filling station' where it is recharged by either local power generation or the national power network," says Peter Oei, the project manager at the Innovation Network.
Device for Treatment of Electrolyte: A cylindrical electrolytic treatment vessel containing the electrolyte serves as one electrode and rotatably mounts, in a concentric manner, a cylindrical rotary body that rotates about its vertical axis. The rotary body has enlarged-diameter portions at its opposite axial ends, and at least the peripheral surfaces of these enlarged portions are made of an insulative material. Between them, the body has a central smaller-diameter portion formed of electrode material, which constitutes the second electrode. The metal web is rotatably supported along its opposed side edges by the electrically insulative surfaces of the enlarged-diameter portions, so that continuous electrolytic treatment of the metal web is feasible as it passes through the electrolyte. The design of the device was proposed by Kazutaka Oda.
Advantages: Liquid electricity gives us a chance to leave behind technology that emits fine dust, carbon dioxide and noise. It provides farmers with a means to supplement their income, giving them the chance to use space on their properties to build wind turbines, solar collectors or biomass plants; the Innovation Network has even coined the term 'photo farmers' for such people. And it would end the use of food plants such as corn and sugar cane to produce ethanol, a practice that is already driving the price of food almost beyond the reach of the world's poorest populations. We currently spend huge amounts of time and energy getting oil from various locations, refining it and transporting it to local fuel stations; this too stresses the need for an alternative way to power our engines.
Drawbacks: Although filling up with 'recharged electrolyte' sounds like a great idea for electric car transport, the cost-effectiveness of the idea is in question because of the energy that would be needed to transport the electrolyte from suburban filling stations back to the power station for recharging. An effective method of handling and storing the electrolyte also needs to be found; otherwise much of the effort spent on charging would go down the drain. And even if the project succeeds, the national power networks may be unable to meet the demand if everyone were to switch to this solution.
Future of the Project: At present a Dutch government think tank, the Innovation Network in the city of Utrecht, is working on a solution: liquid electricity you can buy at photon farms. They intend to bring this technology into use by 2025. Efforts are being made to bring down the cost of recharging the electrolyte, and its transportation is another big aspect they need to ponder. However, of late the picture has become quite clear, and this project has a very bright chance of being technically and economically feasible some day, given the holistic nature of its approach.
The fourth fundamental circuit element
Ashish Chittora, 2nd year E&C, M.E. Aditya Sharat, SE-2K9
Have you ever thought about a "fourth fundamental circuit element" besides the resistor, inductor and capacitor? One was proposed theoretically in 1971 by Leon Chua of UC Berkeley, and its closest practical form has now been realized under the name MEMRISTOR.
The memristor establishes the missing relation between charge (q) and flux (φ):
dφ = M(q)·dq,  i.e.  M(q) = dφ/dq
where M is the memristance, measured in ohms. (The other three elements relate the remaining pairs of circuit variables: R = dv/di, C = dq/dv and L = dφ/di.)
What is a memristor?
Memristor symbol
It is basically a two-terminal electronic element whose resistance depends on the charge that has passed through it. It's called a 'memristor' because it can 'remember' how much current has passed through it. By changing the amount of current passed through it, we can save electronic states in it beyond just '1's and '0's, making it a very good alternative to today's flash memory.
How is a memristor made? It is made of TiO2 sandwiched between platinum electrodes. The upper layer of TiO2 is made oxygen-deficient by chemical processing; the resulting 'holes' (oxygen vacancies) make the upper layer more conductive.
How does a memristor work? Since the upper layer of the memristor is more conductive due to the holes, applying a positive voltage makes the holes spread through the memristor, increasing its conductivity, while the opposite bias decreases it. More interestingly, when the voltage is removed the holes stay fixed at their last position and the conductivity becomes constant: the memristor remembers its last resistance. The next time you turn the memristor ON, it starts from that same remembered resistance.
Advantages: Memristors can actually replace the flash memory chips we use today, and can theoretically provide cheaper and far faster flash memory devices. They also allow far greater memory densities, i.e. more data stored in a smaller area, making storage devices more compact. They have already been shown to be fast enough to replace RAM chips; with a memristor RAM in your PC, even after you switch the computer off it will remember exactly what it was doing, and you will be able to return to work instantaneously. This lowering of cost and increase in speed and efficiency, while reducing component size, will lead to affordable solid-state devices small enough to carry around and many times faster than today's personal computers. And this is just the beginning: a new generation of computers could be spawned with this technology. Today computers work on the binary system, i.e. '1' or '0', and hardware can only interpret data in its raw binary form, limiting its speed and adaptability. Thanks to memristors, which can remember a wide range of electrical states, new-generation hardware components will be able to work faster and perform far more complex tasks more efficiently. HP plans to offer flash memory chips by 2012, and DRAM by 2014-2016.
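The charge-dependent resistance described above is commonly modeled with HP's linear dopant-drift equations. The following Python sketch simulates that model; the parameter values and the 50 ms write pulse are illustrative assumptions, not HP's measured device data:

```python
# Sketch of HP's linear dopant-drift memristor model: the device is the
# series combination of a doped (conductive) region of width w and an
# undoped region of width D - w; current drifts the boundary w.
R_ON, R_OFF = 100.0, 16000.0    # ohms: fully doped vs. fully undoped
D = 10e-9                       # film thickness (m), assumed
MU = 1e-14                      # dopant mobility (m^2 s^-1 V^-1), assumed
dt = 1e-5                       # simulation time step (s)

def memristance(w):
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)

w = 0.1 * D                     # initial doped-region width
m_before = memristance(w)

# Apply +1 V for 50 ms: the current pushes the dopant front forward,
# widening the conductive region and lowering the resistance.
for _ in range(5000):
    i = 1.0 / memristance(w)                         # i = v / M(w), v = 1 V
    w = min(max(w + MU * (R_ON / D) * i * dt, 0.0), D)
m_after_write = memristance(w)

# Remove the bias: with zero current the dopant front stops moving, so the
# resistance is frozen; this is the "memory" behaviour described above.
for _ in range(5000):
    w = min(max(w + MU * (R_ON / D) * 0.0 * dt, 0.0), D)

print(m_before > m_after_write)          # True: the write lowered the resistance
print(memristance(w) == m_after_write)   # True: the state held with power off
```

Reversing the bias in the first loop would drive w back down and raise the resistance again, which is how a stored state is erased or rewritten.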
Applications of the memristor:
• Non-volatile RAM (NVRAM): since a single memristor can replace a complete flip-flop circuit (approximately 10 transistors), you can calculate how much the size of our memory systems will shrink (or how their capacity will increase, almost 10 times in the same area).
• Artificial synapses: memristors can be used as artificial synapses in neural networks.
• FPGAs (field-programmable gate arrays).
A Different Look at the Binary Numbers Kriti Agarwal & Neha Sangwan, 2nd year
You must have studied hard-core electronics, digital systems or complex architectures, but have you ever thought that the most basic and easiest topic in digital systems, the one that makes up the first chapter of most books, can offer some of the most mind-boggling facts?
So it's time to revisit those boring first chapters in an exciting way. If you have gone through a basic course on binary numbers then you must recognize the formula
a_n·r^n + a_(n-1)·r^(n-1) + ... + a_2·r^2 + a_1·r + a_0 + a_(-1)·r^(-1) + a_(-2)·r^(-2) + ... + a_(-m)·r^(-m)
which converts a number (a_n a_(n-1) ... a_1 a_0 . a_(-1) ... a_(-m))_r in base r to its decimal equivalent. Ever wondered why this formula converts a number in any system to the decimal system only, and not to any other system? If all number systems are equivalent and differ only in the choice of base, then all should be governed by the same set of rules, and all formulas should be general and hold for every system. Then why is this formula so specific? Let's analyse things more closely and have a closer look at what is actually happening in this conversion. Take an example: (16)_8 = 1·8^1 + 6·8^0 = 8 + 6 = 14. What we did is multiply 1 and 6 by powers of 8 (as per their positions) and add the results. What if we wish to convert (16)_8 directly to its binary equivalent using the same formula? The major problems in this conversion will be:
1. Nothing like 1 and 6 exists in binary, so multiplying them by any number will not give the desired result.
2. Like 1 and 6, 8 is also not in binary.
3. The addition and multiplication performed here are done on decimal numbers.
So, if we use binary equivalents for all of the above, we may arrive at the correct result. What we will do is:
1. Instead of 1 and 6, we will use their binary equivalents, (1)_2 and (110)_2.
2. Instead of 8, we will use (1000)_2.
3. Lastly, all operations such as addition and multiplication must be performed as per binary rules.
So now the equation will look like
1·(1000)^1 + 110·(1000)^0 = 1000 + 110 = (1110)_2, and (1110)_2 is equivalent to (14)_10 (as 1·8 + 1·4 + 1·2 + 0·1 = 14). So we arrived at the correct result. The formula a_n·r^n + a_(n-1)·r^(n-1) + ... + a_1·r + a_0 + a_(-1)·r^(-1) + ... + a_(-m)·r^(-m) is therefore a general formula that can convert any number (a_n a_(n-1) ... a_1 a_0 . a_(-1) ... a_(-m))_r in base r to any other base R, provided we write a_n, a_(n-1), ..., a_1, a_0, a_(-1), ..., a_(-m) and r in their base-R equivalents and perform all operations as per base R.
Now let us experiment with two numbers in hexadecimal, 15 and 567. Their binary equivalents are 0001 0101 and 0101 0110 0111. Notice that whether the 5 comes in the ones position or the hundreds position it has the same value, i.e. 0101. So digits in hexadecimal are independent of the position they occupy, and so are digits in octal, but this does not hold in decimal. Consider it this way: in the base-2 system, any two bits have 4 different combinations, so each combination can be assigned to a different value of a single digit in base 4: 00 is assigned to 0, 01 to 1, 10 to 2 and 11 to 3. So every digit in base 4 is equivalent to two digits (bits) in base 2. If we write (23)_4, it equals 10 11, and (32)_4 is 11 10; 3 and 2 are independent of the position they occupy, as 3 is always 11 and 2 is always 10, because these are the values they are assigned. If we take three bits, they have 8 combinations, so they correspond to base 8, that is, octal. The same argument holds for hexadecimal. So every digit of the base-2^2 system (base 4) has two equivalent bits, base-2^3 (octal) has three equivalent bits, base-2^4 (hexadecimal) has four, and so on. This argument can be generalised to bases 3, 3^2, 3^3, ... and to base 5 and so on. The crux is that each digit has the same equivalent
irrespective of its position. Is it not great that all the digits in hexadecimal can be independently converted to the binary system without caring about their neighbours at all! But in the case of decimal-to-binary conversion we can't do such a thing. Why does no direct conversion exist in one system whereas in another it does? The answer lies in the origin of hexadecimal, octal and other base-2^n systems. Basically, when we convert any decimal number to its binary equivalent, we perform simple repeated divisions by 2 and note down the remainders. Hexadecimal, octal and similar systems were invented to make this process easy and eliminate redundancy: the remainders are clubbed together, and the higher systems are developed from them. For example, consider any number in the decimal number system. To convert it to binary we divide it repeatedly by 2 and note the remainders; but this is a slow process and takes a lot of time, so instead we have assigned numbers to groups of remainders in the higher systems.
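The positional formula and the digit-group property can both be checked quickly. The following Python sketch (the helper name positional_value is ours; int and format are standard built-ins) applies the formula in Horner form and shows that a hex digit maps to the same 4-bit group wherever it appears:

```python
# Checking the two claims above in Python.

def positional_value(digits, r):
    """Evaluate a_n*r^n + ... + a_1*r + a_0 in Horner form."""
    value = 0
    for d in digits:
        value = value * r + d
    return value

# (16)_8 -> decimal, the worked example from the article:
print(positional_value([1, 6], 8))      # 14

# Each hexadecimal digit maps to the same 4-bit group regardless of position:
print(format(0x15, '08b'))              # 00010101
print(format(0x567, '012b'))            # 010101100111  (5 -> 0101 both times)
```

The same positional_value helper works for any base r, which is exactly the generality argued for above: only the digit encodings and the arithmetic rules change.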
Till now we have learnt two facts:
• The radix (base) system of conversion is a general system, not exclusively reserved for decimal.
• The positional independence, and the reduction in redundancy, in converting base-2^n systems to binary is the reason those systems were invented.
But don't be disheartened: there is a direct method of converting decimal to binary and vice versa. It's the process used in today's computer systems, i.e. BCD, Binary Coded Decimal. Let's understand it using a number, say 10. Its binary-coded decimal is 0001 0000. To convert it to binary, subtract 6 (0110) from it, and we get its binary equivalent, 1010. Why six? When we take the different combinations of 4 bits together we get 16 combinations, but in BCD we define only the 10 digits of base 10, associating them with 10 combinations while the remaining 6 combinations (1010, 1011, 1100, 1101, 1110 and 1111) are skipped. So, in converting numbers from 10 to 19 we subtract 6, while from 20 to 29 we subtract 12, as there we have skipped 12 combinations; from 30 to 39 we subtract 18, and so on, that is, multiples of 6 = 2^4 - 10^1. It follows that for 100 to 109 we would have to subtract 2^8 - 10^2 = 156, and for the numbers above that we will have multiples of 156. Basically, the correction is the difference between the weight of the more significant digits in BCD and the decimal carry.
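The correction factors above (6 for 10-19, 12 for 20-29, 156 for 100-109, ...) can be verified mechanically. Here is a small Python sketch; the helper name packed_bcd is our own illustrative choice:

```python
# Verifying the BCD correction factors derived above.

def packed_bcd(n):
    """Encode a non-negative integer in packed BCD: 4 bits per decimal digit."""
    value, shift = 0, 0
    for ch in reversed(str(n)):
        value |= int(ch) << shift   # each decimal digit occupies its own nibble
        shift += 4
    return value

# The BCD-to-binary correction is the packed-BCD value minus the true value:
for n in (10, 25, 35, 100):
    print(n, packed_bcd(n) - n)     # 10 6, 25 12, 35 18, 100 156
```

The differences printed are exactly the multiples of 6 = 2^4 - 10^1 (and, for the hundreds, 156 = 2^8 - 10^2) that the argument above predicts.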
IMAGE RESTORATION Shobha Tyagi, 2nd year, Information Tech., M.Tech
The purpose of image restoration is to "compensate for" or "undo" defects which degrade an image. Degradation comes in many forms, such as motion blur, noise, and camera misfocus. In cases like motion blur, it is possible to come up with a very good estimate of the actual blurring function and "undo" the blur to restore the original image. In cases where the image is corrupted by noise, the best we may hope to do is to compensate for the degradation it caused.
Degradation Model
[Figure: block diagram of the general degradation model]
where y is the corrupted image obtained by passing the original image x through a low pass filter (blurring function) h and adding noise to it. We present four different ways of restoring the image. If we know of, or can create, a good model of the blurring function that corrupted an image, the quickest and easiest way to restore it is by inverse filtering. Unfortunately, since the inverse filter is a form of high pass filter, inverse filtering responds very badly to any noise present in the image, because noise tends to be high frequency. We can model a blurred image as g = f * b, where f is the original image, b is some kind of low pass filter, g is our blurred image, and * connotes convolution. So to get back the original image, we would just have to convolve our blurred image with some kind of high pass filter h. But how do we find h? If we take the DFT of b, so that B = DFT2(b), we would get a spectrum with many values at or near zero.
[Figure: magnitude of B]
In the ideal case, we would just invert all the elements of B to get a high pass filter. However, notice that a lot of the elements in B have values either at zero or very close to it. Inverting these elements would give us either infinities or some extremely high values. In order to avoid these values, we need to set some sort of a threshold on the inverted elements. So instead of making a full inverse out of B, we can make an "almost" full inverse:
H(u,v) = 1/B(u,v) wherever |B(u,v)| >= n, and 1/n elsewhere
So the lower we set n, the closer H is to the full inverse filter.
Implementation and Results
Case I: When no noise is present in the image
Matlab Code:
N=256; n=.0001;
f=imread('cameraman.tif');
figure(1)
imagesc(f); title('original image'); colormap(gray)
b=ones(4,4)/4^2;
F=fft2(f);
B=fft2(b,N,N);
G=F.*B;
figure(2)
imagesc(abs(ifft2(G))); title('blurred image'); colormap(gray)
BF=find(abs(B)<n);
B(BF)=n;
H=ones(N,N)./B;
I=G.*H;
im=abs(ifft2(I));
figure(3)
imagesc(im); title('restored image'); colormap(gray)
Since Matlab does not deal well with infinity, we had to threshold B before we took the inverse, which is what the lines BF=find(abs(B)<n); B(BF)=n; do. The threshold n is set arbitrarily close to zero for noiseless cases. The following images show our results for n=0.0001.
We see that the restored image is almost exactly like the original. The MSE is 2.5847.
Case II: When Gaussian noise is present in the image
Matlab Code:
N=256; n=.2;
f=imread('cameraman.tif');
figure(1)
imagesc(f); title('original image'); colormap(gray)
b=ones(4,4)/4^2;
F=fft2(f);
B=fft2(b,N,N);
G=F.*B;
g=ifft2(G)+10*randn(N,N);
G=fft2(g);
figure(2)
imagesc(abs(ifft2(G))); title('blurred image'); colormap(gray)
BF=find(abs(B)<n);
B(BF)=n;
H=ones(N,N)./B;
I=G.*H;
im=abs(ifft2(I));
figure(3)
imagesc(im); title('restored image'); colormap(gray)
Because an inverse filter is a high pass filter, it does not perform well in the presence of noise. There is a definite tradeoff between de-blurring and de-noising. In the following image, the blurred image is corrupted by AWGN with variance 10, and n=0.2.
[Figure: restored image]
The MSE for the restored image is 1964.5. We can see that the sharpness of the edges has improved, but we also have a lot of noise specks in the image. We can get rid of more specks (thereby getting a smoother image) by increasing n. In general, the more noise we have in the image, the higher we set n. The higher the n, the less high pass the filter is, which means that it amplifies noise less. It also means, however, that the edges will not be as sharp as they could be.
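For readers without Matlab, the same thresholded inverse filter can be sketched in Python with NumPy. This is our own translation: the random test array stands in for cameraman.tif, and only the 4x4 averaging blur and the clipping rule come from the article.

```python
import numpy as np

def blur_and_restore(f, n=1e-4):
    """Blur with a 4x4 averaging kernel in the frequency domain, then
    apply the thresholded ("almost full") inverse filter: entries of B
    with |B| < n are clipped to n before inverting, mirroring the
    Matlab lines BF=find(abs(B)<n); B(BF)=n; H=ones(N,N)./B."""
    N = f.shape[0]
    b = np.ones((4, 4)) / 16.0            # 4x4 low pass (averaging) kernel
    F = np.fft.fft2(f)
    B = np.fft.fft2(b, s=(N, N))          # zero-padded kernel spectrum
    G = F * B                             # blurred image spectrum
    blurred = np.abs(np.fft.ifft2(G))
    Bc = np.where(np.abs(B) < n, n, B)    # threshold before inverting
    restored = np.abs(np.fft.ifft2(G / Bc))
    return blurred, restored

rng = np.random.default_rng(0)
f = rng.random((64, 64))                  # stand-in for cameraman.tif
blurred, restored = blur_and_restore(f)
mse_blur = np.mean((blurred - f) ** 2)
mse_rest = np.mean((restored - f) ** 2)
print(mse_rest < mse_blur)                # restoration beats the blur
```

The restoration is not exact even without noise: the frequencies where B is exactly zero are unrecoverable, which is why the article's Case I MSE is small but nonzero.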
VISIBLE LIGHT COMMUNICATION Bhavin V Kakani, M.E., 2nd year, ECE
VLC is a technology that uses visible light from any emitting device as a medium to transmit information in open space. Prototype development has shown that visible light can indeed be used as a medium for short-range wireless communication. Visible light is everywhere around us in our daily lives: we rely heavily on our eyes to gather almost all the information for our day-to-day activities, and "visibility" is one of the most important things for human beings, so many devices have been developed to assist it, including the lighting in our offices and homes, lights on the road, traffic signals, commercial displays, and the small lamps on electronic home appliances. Essentially all LED light sources can become information sources or beacons, in addition to their usual function of illumination or display. Both digital and audio information can be transmitted using visible light.
The use of optical emission to transmit information dates back to antiquity. Fire beacons were lit between mountain tops in order to transmit messages over distances. People also used mirror reflections for information delivery, a technique commonly known as the "heliograph". In the early 1790s, Claude Chappe invented the optical telegraph, which was able to send messages over distances by changing the orientation of signalling arms on a large tower; a codebook of arm orientations was developed to encode letters of the alphabet, numerals, common words, and control signals. One of the earliest optical communication devices using electronic detectors was the photophone, invented by A. G. Bell and C. S. Tainter and patented on Dec 14, 1880. The system was designed to transmit an operator's voice over a distance by modulating sunlight reflected from a foil diaphragm. The receiver consisted of a selenium crystal which converted the optical signal into an electric current.
LED AS A MEDIUM
VLC using LEDs is emerging as a key technology for a ubiquitous communication system because the LED
has numerous advantages over other media of communication, such as fast switching, long life expectancy, low cost, and safety for the human body. The VLC system is expected to undergo rapid progress, inspiring numerous indoor and outdoor applications. It has the ability to transfer data at rates of up to 500 megabits per second while the resulting changes in brightness remain imperceptible to the human eye. A further advantage of the LED is that it is environmentally friendly and energy efficient: it consumes little power compared to fluorescent lamps, which is useful in battery-powered or energy-saving devices. The solid package of an LED can be designed to focus its light, and LEDs, being solid-state components, are difficult to damage with external shock. They also light up very quickly; a typical red indicator LED will achieve full brightness in microseconds. LEDs can be very small and are easily populated onto printed circuit boards.
MODULATION SCHEME USED
There are four popular baseband modulation schemes used in VLC:
1. Pulse position modulation (PPM): 2-PPM, 4-PPM, I-PPM
2. Sub-carrier pulse position modulation (SC-PPM): SC-PPM, SC-I-PPM
3. Sub-carrier frequency shift keying (SC-FSK)
4. Sub-carrier phase shift keying (SC-PSK)
The output uses OOK (on/off keying), one kind of intensity modulation with direct detection (IM-DD).
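As a rough illustration of the first scheme, here is a minimal M-ary PPM encoder in Python. This is our own sketch, not from the article: each group of log2(M) bits selects which of M equal time slots in a symbol carries the single light pulse.

```python
import math

def ppm_encode(bits, M):
    """M-ary pulse position modulation: each log2(M)-bit group selects
    which of M equal slots in the symbol carries the single pulse."""
    k = int(math.log2(M))
    assert len(bits) % k == 0, "bit count must be a multiple of log2(M)"
    symbols = []
    for i in range(0, len(bits), k):
        slot = int("".join(map(str, bits[i:i + k])), 2)
        symbols.append([1 if s == slot else 0 for s in range(M)])
    return symbols

# 4-PPM: the bit pair 10 (binary 2) lights slot 2 of 4
assert ppm_encode([1, 0], 4) == [[0, 0, 1, 0]]
# 2-PPM: one bit per symbol, pulse in slot 0 or slot 1
assert ppm_encode([0, 1], 2) == [[1, 0], [0, 1]]
```

Because exactly one slot per symbol is lit regardless of the data, PPM keeps the average LED brightness constant, which matters when the same LED also provides illumination.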
The audio signal from the source has a small amplitude, so amplification is necessary. The audio amplifier amplifies the weak signal and shifts the average voltage level so that the signal lies within the capture range of the VCO. The VCO chip modulates the incoming signal variations from the audio amplifier and generates the FM signal. A square-wave VCO is used because there are only two states for the LEDs. The carrier frequency is set at 100 kHz with a deviation of 5 kHz. The modulated signal is then transmitted by switching the LEDs.
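The transmitter chain above can be approximated numerically. The sketch below is our own: it assumes a 2 MHz sample rate and an audio signal normalized to [-1, 1], neither of which is specified in the article; only the 100 kHz carrier and 5 kHz deviation come from the text.

```python
import numpy as np

fs = 2_000_000        # sample rate, Hz (assumed; must well exceed the carrier)
fc = 100_000          # carrier frequency from the article, 100 kHz
dev = 5_000           # frequency deviation from the article, 5 kHz

def vco_square(audio, fs=fs, fc=fc, dev=dev):
    """Square-wave VCO: instantaneous frequency fc + dev*audio.
    The +/-1 output is the on/off drive for the LEDs."""
    inst_freq = fc + dev * np.asarray(audio)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs  # integrate frequency
    return np.sign(np.sin(phase))

t = np.arange(fs // 100) / fs             # 10 ms of samples
audio = np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone in [-1, 1]
led_drive = vco_square(audio)             # two-state signal for the LEDs
```

The receiver would recover the audio by detecting the zero-crossing rate of the photodiode signal, the usual FM demodulation counterpart of this scheme.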
• The photodetector circuit consists of a photodiode and a resistor. Since the signal from the photodetector circuit is small, amplification is needed for the next stage; a limiting preamplifier circuit is used for this. The circuit amplifies the input signal to a certain level, and a comparator is used to produce rectangular pulses. Then, after bandpass filtering and amplification, the output is fed to the speaker.
The challenges which must be overcome to make this novel technology fruitful include:
1. Ambient interference:
(a) Obstacles: a line of sight must be maintained
(b) Interference from other visible sources (the sun, bulbs, etc.)
(c) Multipath effects, since light reflects
2. Improving the data rate
3. Providing an uplink
4. Compatibility with illumination
5. Parallel data communication
VLC has a variety of potential applications. In the home, for example, it could represent a valuable addition to established WLAN technology. Increasingly, wireless networks are compromised by the fact that in many buildings the three independent WLAN frequency bands are multiply occupied, which leads to collisions among the data packets. In a situation like this, visible light, as a currently unused and license-free medium, offers a suitable alternative. A further advantage is that this form of data transfer is impervious to interception: only a photodetector positioned directly within the light cone is able to receive the data. In other words, it is impossible to "tap" the data transported in the light beam. There is also a need for this type of data transfer in factory and medical environments, where in certain areas radio-borne transmission is either impossible or only a limited option. A further application is in the field of transportation, where LED stoplights or railroad signals could be used to transmit information to cars or trains. Other applications are intelligent transport systems and underwater communication.
Wireless Transmission System
Palak Jain, 3rd year, EE
Our forefathers marveled at Thomas Edison's invention of the glowing light bulb in 1879. To us 21st-centurions, however, the light bulb is nothing out of the ordinary; it was when computers, cell phones, laptops, iPods and the like were invented that our antennas tweaked. Each appliance has its own set of chargers, and with every family member owning a cell phone, the drawers are overflowing with all sorts of wires. When you are on the way to work and your cell phone beeps in hunger for a battery charge, haven't you wished for your cell phone battery to get 'self charged'? Well, your plight has been heard by doctor 'WiTricity'. Wireless power transmission is not a new idea: Nikola Tesla proposed theories of wireless power transmission in the late 1800s and early 1900s. He made what he regarded as his most important discovery, terrestrial stationary waves, by which he proved that the Earth could be used as a conductor and would be as responsive as a tuning fork to electrical vibrations of a certain frequency. He also lit 200 lamps without wires from a distance of 25 miles (40 kilometers) and created man-made lightning. He charged capacitors to high voltages and discharged them in very short time intervals; these very short pulses produced very sharp shockwaves which radiated out and penetrated metal, glass and every other kind of material. This was clearly not an ordinary electromagnetic wave, so he called the new wave "Radiant Electricity", and these waves were used to light the lamps. Tesla's work was impressive, and since then many researchers have developed techniques for moving electricity over long distances without wires. Some exist only as theories or prototypes, but others are already in use. In 2006, researchers at the Massachusetts Institute of Technology led by Marin Soljacic discovered an efficient way to transfer power between coils separated by a few meters.
They demonstrated that by sending electromagnetic waves around in a highly angular waveguide, evanescent waves are produced which carry no energy away from the source. An evanescent wave is a near-field standing wave exhibiting exponential decay with distance; such waves are always associated with matter and are most intense within one-third of a wavelength from any radio antenna. "Evanescent" means "tending to vanish": the intensity of evanescent waves decays exponentially with distance from the interface at which they are formed. If a proper resonant waveguide is brought near the transmitter, the evanescent waves can allow energy to tunnel to the power-drawing waveguide. Since the electromagnetic waves tunnel rather than propagate through the air, they are not absorbed or dissipated, and they would not disrupt electronic devices or cause physical injury
the way microwave or radio wave transmission can. They have dubbed this technology witricity.
ADVANTAGES OF WITRICITY:
1. No need of line of sight: in witricity power transmission there need not be a clear line of sight between transmitter and receiver, i.e. power transmission is possible even if obstructions like wood, metal or other devices are placed in between.
2. No need of power cables and batteries: witricity replaces the use of power cables and batteries.
3. Does not interfere with radio waves.
4. Little wastage of power: since the electromagnetic waves tunnel, they do not propagate through the air to be absorbed or dissipated, so the wastage is small.
5. No negative health implications: the fields produced by resonant coupling are far weaker, making the scheme harmless.
DISADVANTAGES:
1. Wireless power transmission is possible only over a few meters.
2. Efficiency is only about 40%.
As witricity is in the development stage, a lot of work is being done to improve the efficiency and the distance between transmitter and receiver.
APPLICATIONS:
Witricity has a bright future in providing wireless electricity, and there are few limits to its potential applications. These include powering cell phones, laptops and other devices that normally run on batteries or plugged-in wires. Witricity applications are expected to work on gadgets that are in close proximity to a source of wireless power, with the gadgets charging automatically without necessarily having to be plugged in. With witricity there is no need for batteries, or for remembering to recharge them periodically, and a source placed in each room could provide power supply to the whole house. Witricity also has many medical applications: it is used for providing electric power in many commercially available implantable medical devices. Another application of this technology is the transmission of information.
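The roughly 40% efficiency mentioned above can be put in context with a standard coupled-mode-theory estimate for resonant inductive links. This formula comes from the general literature on resonant power transfer, not from the article, and the coupling and quality-factor values below are purely illustrative:

```python
import math

def max_link_efficiency(k, Q1, Q2):
    """Maximum power-transfer efficiency of a resonant inductive link,
    using the standard figure of merit U = k*sqrt(Q1*Q2):
        eta_max = U**2 / (1 + sqrt(1 + U**2))**2
    k is the magnetic coupling coefficient, Q1 and Q2 the coil Q factors."""
    U = k * math.sqrt(Q1 * Q2)
    return U**2 / (1 + math.sqrt(1 + U**2))**2

# even weak coupling (k = 0.01) can give usable efficiency if both
# resonators have high Q, which is the core idea behind witricity
eta = max_link_efficiency(0.01, 1000, 1000)   # U = 10, eta about 0.82
```

The formula makes the design tradeoff explicit: efficiency depends only on the product k*sqrt(Q1*Q2), so the rapid fall-off of coupling k with distance can be offset by very high-Q resonant coils.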
It would not interfere with radio waves, and it is cheap and efficient. In a nutshell, witricity is in the development stage, and much work remains before it can be used for wireless power applications. Currently the project is looking at power transmission in the range of 100 W. Before this technology is established, detailed studies must be done to check whether it causes any harm to living beings.
IEEE DTU STUDENT COUNCIL 2010-11
VICE-CHAIRPERSON AND GENERAL SECRETARY
HEAD, TECHNICAL AFFAIRS
HEAD, TECHNICAL SUPPORT-HARDWARE
HEAD, TECHNICAL SUPPORT-SOFTWARE
HEAD, TECHNICAL SUPPORT-RESEARCH
HEAD, WEB MANAGEMENT AND DEVELOPMENT
HEAD, CORPORATE AFFAIRS AND INDUSTRIAL INTERFACE
HEAD, HUMAN RESOURCES AND PUBLIC RELATIONS
ANIRVANA MISHRA
AKSHAY BAHL
MEGHNA ARORA
MANISHA TANWAR
HEAD, LOGISTICS AND OPERATION