Wednesday, November 27, 2019

The Origins and Specificity of Parasites free essay sample

This paper discusses the origins of life and the role that parasites play in its continuity. It examines the origins of parasites, their relationship to their hosts, and how they have evolved in tandem with many other organisms. The paper seeks to answer several questions, including why parasites live where they do, how the origins of evolution affect different parasites (specifically RNA), and what role protozoans play in the life of parasites. The paper also discusses the process of co-evolution and the effect that a parasite's long-term residence has on the body of different species, including humans. However, it is once an organism has taken up residence inside another organism that a second and crucial process comes into play: co-evolution. Co-evolution rests on the relatively simple fact that evolution is a non-stop process. All species are continually changing and developing. Genetic mutations, errors in the copying of DNA and RNA, lead to minute, or at times dramatic, changes that might be either beneficial or maladaptive. In the normal course of things the maladaptive forms will die out, while the successful adaptations will survive as a result of the organisms that possess them living on to reproduce. The same process of evolution is at work in both host and parasite. As the host itself changes, the environment inside it changes as well. Subtle differences in conditions might mean death for a microorganism living inside the body of another animal.

Sunday, November 24, 2019

Free Essays on C Programming In Steps

1. Introduction. C is a computer language available on the GCOS and UNIX operating systems at Murray Hill and (in preliminary form) on OS/360 at Holmdel. C lets you write your programs clearly and simply: it has decent control flow facilities so your code can be read straight down the page, without labels or GOTOs; it lets you write code that is compact without being too cryptic; it encourages modularity and good program organization; and it provides good data-structuring facilities. This memorandum is a tutorial to make learning C as painless as possible. The first part concentrates on the central features of C; the second part discusses those parts of the language which are useful (usually for getting more efficient and smaller code) but which are not necessary for the new user. This is not a reference manual. Details and special cases will be skipped ruthlessly, and no attempt will be made to cover every language feature. The order of presentation is hopefully pedagogical instead of logical. Users who would like the full story should consult the "C Reference Manual" by D. M. Ritchie [1], which should be read for details anyway. Runtime support is described in [2] and [3]; you will have to read one of these to learn how to compile and run a C program. We will assume that you are familiar with the mysteries of creating files, text editing, and the like in the operating system you run on, and that you have programmed in some language before. 2. A Simple C Program: main( ) { printf("hello, world"); } A C program consists of one or more functions, which are similar to the functions and subroutines of a Fortran program or the procedures of PL/I, and perhaps some external data definitions. main is such a function, and in fact all C programs must have a main. Execution of the program begins at the first statement of main. main will usually invoke other functions to perform its job...

Thursday, November 21, 2019

Gastroesophageal Reflux Disease Essay Example | Topics and Well Written Essays - 4500 words

Gastroesophageal Reflux Disease - Essay Example The superficial layer of the esophagus is known as the adventitia. The adventitia attaches the esophagus to surrounding structures. The esophagus secretes mucus and transports food into the stomach, but it does not produce digestive enzymes. The stomach contents can reflux (back up) into the inferior portion of the esophagus when the lower esophageal sphincter fails to close adequately after food has entered the stomach, a condition known as gastroesophageal reflux disease (GERD). In GERD, the basal layer of the epithelium is thickened, and the papillae of the lamina propria are elongated and extend toward the surface. Abnormalities of the lower esophageal sphincter, hiatus hernia, delayed esophageal clearance, gastric contents, defective gastric emptying, increased intra-abdominal pressure, and dietary and environmental factors may all be involved in GERD. Clinically, GERD may be diagnosed by radiographic examinations using dyes, endoscopy, and esophageal pH-metry. A PPI trial is also an accepted first-line diagnostic test. Treatment involves both non-pharmacological measures and pharmacotherapy. Non-pharmacological measures include avoidance of foods and medications that exacerbate GERD, smoking cessation, weight reduction, taking small meals, avoidance of alcohol, and elevation of the head of the bed. Drugs for treating GERD include H2 receptor antagonists such as cimetidine, ranitidine, famotidine and nizatidine. Proton pump inhibitors such as omeprazole, pantoprazole, lansoprazole and rabeprazole are found to be very effective in reducing acid reflux and hence inflammation. H2 blockers were found to heal ulcers and erosions, but the typical reflux changes of the squamous epithelium of the esophageal mucosa do not recover. Studies confirmed that PPIs not only heal ulcers and erosions but also reverse basal cell hyperplasia and elongation of the papillae.
The percentage of normal epithelium was also reported to increase significantly after PPI administration in patients with GERD. Two commonly employed treatment alternatives for GERD are antireflux surgery and endoscopic treatment. Surgery is preferred only when the patient fails to respond to pharmacological treatment, when the patient prefers surgery, in patients who have complications of GERD such as Barrett's esophagitis, or if the patient has atypical symptoms and reflux documented on 24-hour ambulatory pH monitoring. Several randomized controlled trials have confirmed that open fundoplication and medical treatment have similar long-term effects for GERD. Laparoscopic antireflux surgery is also reported to have similar outcomes to the open procedures. Endoscopic treatments for GERD include methods that plicate the gastric folds (endoscopic gastroplication, the ELGP method), thermal tissue remodeling/neurolysis, and bulking-injection methods. These endoluminal treatments augment the reflux barrier. Section I: Organ system structure and function. Explain the accepted normal, healthy structure and function of the organ system; how it works. The esophagus is a collapsible muscular tube, about 25 cm (10 in.) long, that lies posterior to the trachea. The esophagus begins at the inferior end of the laryngopharynx and passes through the mediastinum anterior to the vertebral column. Then it pierces the diaphragm through an opening called the

Wednesday, November 20, 2019

Traditional Chinese Culture Essay Example | Topics and Well Written Essays - 1000 words - 10

Traditional Chinese Culture - Essay Example The Chinese traditional distinct language is a cultural value that establishes a mutually tolerant and universally embraceable world order. Language is a communication symbol that connects Chinese citizens with external contacts. This is because a person has to learn the traditional Chinese language in order to transact business with the locals (Zhang 9). The distinct grammatical and phonological set-up of the language inspires interest among foreigners. Similarly, the unique writing style requires one to understand the sentence formation for easy communication. For instance, business interactions require the usage of a common communication model understood by all partners. The traditional Chinese language has been studied across the world because the country boasts sophisticated industrial and technological advancements (Zhang 10). As a result, this has inspired a mutually tolerant and universally embraceable world order where people from diverse backgrounds come together to learn a common language. Religion is always a unifying element that brings believers together to embrace and accept each other. China has three main religious denominations that people profess for spiritual nourishment and divine intervention. Confucianism, Buddhism and Taoism have contributed to Chinese civilization through their spiritual teachings. Buddhism is the most practiced religion in the country and spreads across other Asian nations (Zhang 15). The social and ethnic relevance of the religious associations has enabled the believers to develop their generation. It is apparent that Buddhism has had a remarkable contribution to Chinese civilization because many of the words and phrases used in the country have Buddhist roots.

Sunday, November 17, 2019

Discussion board reply Assignment Example | Topics and Well Written Essays - 250 words - 6

Discussion board reply - Assignment Example do not understand that, in the fast-paced global operations within the business sector, it is important to adopt HR policies that are competitive and goal-oriented. Thus, with the implication of HRD, many organizations can improve their working schema (Gilmore & Williams, 2012). The post also incorporates the challenges that the chosen company may undergo. This can surely be followed as a roadmap when it comes to selecting HRD-implicated approaches. The core compliance of HRD policies with the employment population and diversity remains yet another significant pointer that has been discussed in a very effective manner. I would like to add to the written post that HRD implication in any organization needs a thorough evaluation. It is not easy to conduct HR policies unless thorough research is undertaken regarding the working asset of the company, i.e. its workforce, which remains widely diversified and effective (Gilmore & Williams,

Friday, November 15, 2019

Effects of Theatre Arts on Emotional Intelligence

Effects of Theatre Arts on Emotional Intelligence This study has attempted to examine the impact that an individual's involvement in Theatre Arts has on his or her Emotional Intelligence (EI). The hypothesis of the present research is thus: there is a positive relationship between one's involvement in theatre arts and one's emotional intelligence. Participants of this study were residents of Bangalore city, India (N=80). The scale employed in this research and administered to the sample was the Emotional Intelligence Scale, developed by Anukool Hyde, Sanjyot Pethe and Upinder Dhar. The findings of the study were that individuals who have been active participants in theatre arts had a higher EI (M=138.67) than individuals who were not exposed to the theatre arts (M=129.65). These results indicate that exposure to, participation in, and understanding of the theatre arts is highly useful for emotional, and hence mental, well-being. EMOTIONAL INTELLIGENCE AND THEATRE ARTS Emotional intelligence (EI) is defined by Cooper and Sawaf (1997) as the ability to sense, understand and effectively apply the power and acumen of emotions as a source of human energy, information, connection and influence. It comprises the ability to perceive accurately, evaluate and express emotions; the ability to comprehend emotions and emotional knowledge; and intellectual growth. It is also characterized by self-awareness, mood management, self-motivation, empathy and managing relationships. The most widely recognized definition of emotional intelligence is that given by Peter Salovey and John D. Mayer, who have been leading researchers in the field: the subset of social intelligence that involves the ability to monitor one's own and others' feelings and emotions, to discriminate among them and to use this information to guide one's thinking and actions (1990).
What popularized the study of emotional intelligence was the publication of Goleman's bestselling Emotional Intelligence in 1995. The model introduced by Daniel Goleman places its focus on leadership performance guided by a large collection of competencies and skills by means of emotional intelligence (Goleman, 1988). Goleman's model demarcates four main EI constructs, namely: self-awareness, the ability to construe one's emotions and understand their influence while using intuitions and instincts to direct decisions; self-management, which has to do with controlling one's emotions and impulses and adjusting to new situations; social awareness, the ability to discern, comprehend, and respond to others' emotions; and relationship management, the ability to motivate, influence, and develop others while dealing with difficult situations (Bradberry & Greaves, 2009). The origins of this subject can be traced back to Darwin's work on the importance of emotional expression for survival (Bar-On, 2006). In The Expression of the Emotions in Man and Animals (1872), Darwin put forth that human emotional expressions have an adaptive and survival value, and that this feature has had consequences for their evolution. He also posited that there are some human reactions which are not of significant survival value now, but were in the past, and that this, coupled with a similarity of emotional expression among all human beings, suggests a common descent from an earlier pre-human ancestor (Encyclopaedia of Psychology, n.d.). In the twentieth century, publications began appearing with the work of Edward Thorndike on social intelligence in 1920, which described the skill of understanding and managing other people (Bar-On, 2006). Many of these early studies focused on describing, defining and assessing socially competent behaviour (Chapin, 1942; Doll, 1935; Moss & Hunt, 1927; Moss et al., 1927; Thorndike, 1920).
This was then followed by studies on the influence of non-intellectual factors on intelligent behaviour by D. Wechsler (as cited in Bar-On, 2006), and by the concept of multiple intelligences put forth by Howard Gardner in 1983 (Smith, 2002). In recent years the study of emotional intelligence has escalated. Research includes areas ranging from emotional intelligence and its relationship with workplace and social competencies to its influence on a healthy and productive life as such (Consortium for Research on Emotional Intelligence in Organizations, http://www.eiconsortium.org/about_us.htm). For example, emotional intelligence has become increasingly popular as a measure for identifying potentially effective leaders, and as a tool for developing effective leadership skills (Palmer, Walls, Burgess, & Stough, Leadership and Organization Development Journal, 2001). In the study mentioned, emotional intelligence correlated with several components of transformational leadership, suggesting that it may be an important component of effective leadership. In particular, emotional intelligence may account for how effective leaders monitor and respond to subordinates and make them feel at work. Further, in a study conducted by the Center for Creative Leadership in the USA, individual scores obtained by a multi-rater feedback tool called Benchmarks were compared to self-reported emotional intelligence as measured by the BarOn EQ-i, and the findings were that key leadership skills and perspectives are related to aspects of emotional intelligence, and that the absence of emotional intelligence was related to career derailment (Leadership Skills and Emotional Intelligence, Center for Creative Leadership, http://www.ccl.org/leadership/pdf/assessments/skills_intelligence.pdf, 2003). The study of emotional intelligence has gathered momentum in the field of healthcare as well. In the year 2000, a study conducted by Joseph Ciarrochi, Frank P.
Deane and Stephen Anderson, Department of Psychology, University of Wollongong, Australia, hypothesized that EI would make a unique contribution to understanding the relationship between stress and three important mental health variables: depression, hopelessness, and suicidal ideation. This was a cross-sectional study in which university students were required to evaluate their life stress, objective and self-reported emotional intelligence, and mental health. One of the findings revealed that stress was associated with greater suicidal ideation among those low in managing others' emotions (MOE). MOE was shown to be statistically distinct from other relevant measures, suggesting that EI is highly essential in understanding the link between stress and mental health. Emotional Intelligence and Alexithymia. Alexithymia, literally "without words for emotions" in Greek, was a term originally coined by psychotherapist Peter Sifneos in 1973 to describe a state of deficiency in understanding, processing or describing emotions (Bar-On & Parker, 2000, pp. 40-59; Taylor, 1997, pp. 28-31). Alexithymia is defined by many factors, such as: difficulty identifying feelings and distinguishing between feelings and the bodily sensations of emotional arousal; difficulty describing feelings to other people; constricted imaginal processes, as evidenced by a paucity of fantasies; and a stimulus-bound, externally oriented cognitive style (Taylor, 1997, p. 29). Logically, one would expect an inverse relation between the constructs of alexithymia and emotional intelligence. This expectation has been supported in the literature. Schutte et al. (1998) found that in a sample consisting of university students, a self-report measure of emotional intelligence (the Self-Report Emotional Intelligence Test) was significantly inversely correlated with the Toronto Alexithymia Scale, which was used as the standard measure for alexithymia.
Research with larger community samples has found particularly significant associations. For example, Parker, Taylor, and Bagby (2001) found a strong negative correlation between the Emotion Quotient Inventory and the TAS in a sample of 734 community members (Stys & Brown, 2004, A Review of the Emotional Intelligence Literature and Implications for Corrections, p. 28). According to Johanna Vanderpol (n.d.), an author, speaker, coach and workshop provider in emotional intelligence and emotional well-being in Canada, art and play, which are forms of emotional expression, are the essential ways in which individuals, especially young children, expand their abilities and master their environment; she further states that emotional expression is but a part of developing emotional intelligence. One such study presented a series of experiential exercises designed to use visual arts and poetry in classroom settings to increase students' awareness and recognition of emotion, two key components of emotional intelligence (Morris, Urbanski, & Fuller, 2005). In a study titled Emotional Intelligence and the Performing Arts: Crossing Disciplinary Boundaries, an experiential training program that employed the Ability Model of emotional intelligence (Salovey & Mayer, 1990, 1997) was combined with performing arts and drama therapy to create a workshop program, whose aim was to increase awareness of the role of emotions in working life and to provide interactive learning opportunities for participants to engage with complicated emotional dilemmas arising from their leadership roles. Survey results from the workshops and a focus group at three months' follow-up revealed that participants used the learning experience of the workshop to address and resolve specific leadership challenges in their roles (Rauker, Skinner, & Bett, 2009). The current study attempts to show a relationship between emotional intelligence and one's involvement in the Theatre Arts.
Theatre, or Drama, as it is more commonly known, is the most integrative of all the arts: it can, and often does, include singing, dancing, painting, sculpture, storytelling, puppetry, music, poetry and, of course, the art of acting (Snow, D'Amico, & Tanguay, 2003, p. 73). It has also been widely contended that there is an innate healing function in theatre which goes all the way back to its origins in human culture (Bates, 1988; Emunah, 1994; McNiff, 1988; Pendzik, 1988; Snow, 1996). A wide range of study has been done on the influence of drama on psychological well-being and the role it plays in psychotherapy, hence giving rise to the concept of Drama Therapy. Drama therapy is one of several expressive or creative arts therapies, among which are art therapy, dance/movement therapy, music therapy, poetry therapy and psychodrama; it concerns the therapist and the client, who attempt to evaluate the client's life experiences as they engage in a largely creative process, in this case through the media of drama and theatre (Landy, 2006, p. 135). One such drama therapy technique that has been studied is Dramatic Resonances. This method is based on the creative responses that participants offer from within dramatic reality to an input posed from outside dramatic reality (Pendzik, 2008, p. 217). Further, therapeutic theatre has been a growing field; it is an approach that involves the therapeutic development of a play and its presentation in front of an audience (Pendzik, 2008). It is defined as the therapeutic development of a play in which the roles are established with therapeutic goals in mind; the whole process of play production is, in fact, a form of group psychotherapy; it is all facilitated by a therapist skilled in drama; and finally the play must be performed for a public audience (Snow, D'Amico, & Tanguay, 2003, p. 73). However, according to Robert J.
Landy, though the field of drama therapy has been growing in numbers, university-based training programs in the USA are inadequate (Landy, 2006). This trend could be an indicator of a potential consequent decline in the study of this field. This paper aims to encourage a positive shift from such a trend and to bring about an increasing awareness and attestation of the constructive relationship between drama and emotional intelligence. Considering the significant research that has gone into the relationship between emotional well-being and the theatre arts, largely in the West, this study attempts to investigate the prevalence of a positive relationship between a thorough involvement in the Theatre Arts and emotional intelligence among individuals residing in a theatre-active city in India. The study was conducted by means of a questionnaire based on the Emotional Intelligence Scale, as completed by a total of 80 individuals, all of whom reside in Bangalore, India, a city acclaimed for its active involvement in the theatre arts. Methodology. Participants. The study was conducted by means of a standardized questionnaire, viz. the Emotional Intelligence Scale (EIS), completed by a total of 80 individuals, all of whom reside in Bangalore. Of these 80 individuals, 40 belong to the control group. This group consists of individuals who have not been exposed to the theatre arts. Of these 40 individuals, 20 belong to the age group of 20-25 years (M = 21.5 years) while the rest belong to the age group of 30-35 years (M = 32 years). The experimental group consists of 40 individuals who have been active members of theatre associations across the city. Of these 40 individuals, 20 belong to the age group of 30-35 years (M = 32.5 years), while the rest belong to the age group of 20-25 years (M = 21.5 years).
Ethical concerns were addressed: the participants were informed of the purpose of the study, signed a consent form before participating, and were assured of confidentiality. Materials. The questionnaire used was the standardized Emotional Intelligence Scale developed by Anukool Hyde, Sanjyot Pethe and Upinder Dhar in 2001, published by Vedant Publications, Lucknow. It consists of 34 Likert-scale items, each of which the participant answers by choosing one of five options: Strongly Agree, Agree, Uncertain, Disagree, Strongly Disagree. Design. This study fundamentally deals with two variables, involvement in the theatre arts and emotional intelligence, the dependent variable being emotional intelligence and the independent variable being involvement in theatre arts. Of the 80 individuals, 40 belonged to the control group, consisting of individuals who have not been exposed to the theatre arts. The experimental group consists of 40 individuals who have been active members of theatre associations across the city. Of these 40 individuals, 20 belong to the age group of 30-35 years and have had experience in one or more of the various aspects of theatre, such as acting, directing, story-telling and music, for a minimum of 10 plays; the rest belong to the age group of 20-25 years and have similarly participated in a minimum of 5 plays so far. This division of age groups was employed with the aim of representing growth in the groups' emotional intelligence. Procedure. The experimental group was obtained at an auditioning program held by Evam, a leading dramatics association in Bangalore, where 40 individuals, some auditioning and some organizing, were approached on a one-to-one basis and asked to fill out the EIS questionnaire. Demographic details such as age, sex and experience in theatre were taken.
The control group consisted of randomly selected individuals who reside in Bangalore and have had no experience of involvement in the theatre arts. They were similarly asked to fill out the EIS, along with their corresponding demographic details. The entire study was conducted in one city in an attempt to maintain a certain consistency in the results and to minimise any potential disparity. Results. With the raw scores obtained, the statistical analysis included finding the mean, the standard deviation, and the standard error of the difference between the means of the two samples, and employing a non-parametric test, the Mann-Whitney U test. In the results obtained for the Mann-Whitney U test, the z values of the sampling distributions of U and U', 2 and 5.68 respectively, were found to be significant at both the 0.05 and 0.01 levels. The mean for the experimental group was 138.67, and for the control group 129.65. For the experimental group, the standard deviation was found to be 8.83; for the control group, the SD obtained was 1.11. In determining the significance of the difference between the means of the two groups, the standard error obtained was 2.10, for which the z value was found to be 4.29. Thus, the computed z value was found to be significant at both the 1% and 5% significance levels. Further, the Mann-Whitney U test was applied to the subgroups under the experimental group in order to show a positive relation between the two. While the z value obtained for U was found to be 1.48, not significant at the 0.05 and 0.01 levels, the z value obtained for U' was 7.85, significant at both the 0.05 and 0.01 levels.
Discussion. This paper has fundamentally attempted to study the relationship between one's involvement in the theatre arts and one's emotional intelligence, and how, with time and experience, an increasing involvement in the same leads one to develop greater EI, which in turn implies increased accuracy in perceiving, appraising, managing and expressing emotions. As Cooper and Sawaf demonstrated in 1997, the characteristic manifestations of a high EI include self-awareness, mood management, self-motivation, empathy and managing relationships. Thus, through investigating the levels of emotional intelligence of the participating individuals and inquiring into their experience in the theatre arts, the researcher has arrived at findings which show a positive relationship between the two variables. From examining the results obtained, some of the deductions are that young adults who engage in theatre arts such as drama (acting), music, story-telling and direction tend to have higher emotional intelligence compared to young adults who do not engage in any of the theatre arts; and that with time and experience these individuals could have a propensity for consistent growth in their EI, again as compared to individuals of their age who have had no inclination towards the theatre arts. These two findings could further imply that these individuals would be likely to have more rewarding, productive and successful lives. One more supposition which could be drawn from the results of this study is that these individuals may cope better with stress and setbacks, implying a lowered risk of heart disease, anxiety attacks, psychological distress, sleep problems, high blood pressure, poor immune function, alcoholism, etc. (Mikolajczak, Luminet, & Menil, 2006; Hunt & Evans, 2002; Trinidad & Johnson, 2000). However, there are some probable challenges that can be posed to these conclusions.
The entire study was based in one single city, and the challenge in this case is that theatre culture may vary from city to city, just as it varies from theatre group to theatre group. Therefore, generalizing the results would have to be limited to the city where the study was conducted. Further, the study did not consider the role gender could play in the relationship between one's EI and one's involvement in the theatre arts, as there was no categorization of the two sexes while conducting the study. This could, in fact, motivate a future experiment on whether gender plays a role in the development of EI by way of thorough involvement in the theatre arts. Additionally, the researcher has considered the theatre arts as a whole, comprising various aspects such as acting, music, story-telling and direction. The participants of the study belonging to these categories were distributed unequally. Thus, the results obtained in the study must be considered generically and cannot be taken into account categorically. Further research could perhaps be carried out to study the individual aspects, acting alone, for example, and each aspect's relationship with participants' emotional intelligence. One possible source of error, and an intervening variable, could have been the environment in which the test was administered and the mental set of the participant while filling out the questionnaire. It must be noted that the study was conducted at an auditioning program of a theatre group and that most of the participants had only just finished their turn at the audition. It can be assumed that the mental set of the participant at this stage could have affected his or her responses in the test.
In other words, the participant's perception of his or her own performance at the audition, which could either have been positive and affirmative or negative and uncertain of the chances of success in the attempted task, is likely to have influenced the responses he or she provided on the Emotional Intelligence Scale. A possible remedy, to neutralize the effects of the audition performance, would be for the researcher to give the participant a time gap of approximately half an hour before administering the test, on the assumption that the participant would then be less likely to be influenced by the audition performance while responding. In conclusion, this study has successfully investigated the issue it primarily aimed to, and in spite of the potential challenges to its findings, it has supported the hypothesis that there is a positive relationship between one's involvement in the theatre arts and one's emotional intelligence. The findings of the study call for further research in the vast area of psychological health and the creative arts, of which the theatre arts are an integral part, especially in India, as the current study was conducted with the aim of bringing about an awareness in Indian society of the great advantages of the theatre arts and their positive relationship with psychological well-being.

Tuesday, November 12, 2019

Plot of Pride and Prejudice

Mrs. Bennet is anxious to have her five daughters marry into wealthy houses. When a rich single man, Charles Bingley, arrives nearby, she urges her husband to get to know him. The Bennets go to a ball in the town of Meryton and are introduced to Charles Bingley. Everyone likes him, but his friend Fitzwilliam Darcy is found to be arrogant. Mr. Darcy doesn't dance with anyone outside his "group," and he says that Elizabeth Bennet is attractive, but not enough to tempt him. Mr. Bingley starts to admire Jane Bennet, and his love deepens to the extent that Jane's sisters and Mr. Darcy grow concerned. Mr. Darcy is repelled by the family's lower status and embarrassing behavior. Nevertheless, Mr. Darcy becomes interested in Elizabeth's good-spirited character, and Caroline Bingley's jealous disapproval does nothing to lessen his interest. Caroline asks Jane to come to Netherfield. On the way there, Jane catches a cold and is forced to stay. Mrs. Bennet welcomes this news because she will use any means to push her daughter onto Mr. Bingley. Jane's condition worsens, and Elizabeth goes to Netherfield as well. Her concern for her sister and her intellect interest Mr. Darcy even more, but he is afraid of falling in love with someone so much poorer. Mr. Bennet's estate at Longbourn is supposed to pass to Mr. Collins, a clergyman, because Mr. Bennet has no son and Mr. Collins is the nearest male relative. Mr. Bennet sends his cousin on an errand to Meryton with his daughters. There they meet George Wickham, a handsome militia officer. At an evening party, Wickham tells Elizabeth his life story. Wickham's story makes Darcy look arrogant and cruel, and Elizabeth holds a prejudice against Mr. Darcy from then on. At another ball, Elizabeth resents Wickham's absence. Later she is also embarrassed by her family. Mrs. Bennet refuses to stop talking about what a good couple Jane and Mr. Bingley will make.
On the other hand, Mary Bennet bores the whole company by trying to play the piano. Mr. Collins suddenly proposes to Elizabeth at the ball, and she rejects him. Mr. Collins does not believe that Elizabeth is refusing him in earnest, but after Mr. and Mrs. Bennet explain it to him, he seems to understand. The whole Bingley party, all of a sudden, leaves Netherfield for London. Caroline Bingley writes to Jane that they do not mean to return for the whole winter, and she tells her what a good couple Georgiana Darcy and Mr.

Sunday, November 10, 2019

Is the Death Penalty a Deterrent? Essay

No other topic in the field of corrections receives more attention than the death penalty (del Carmen). The United States is one of the few democracies in the world that still imposes a punishment of death, due largely to the strength of public opinion. Since 1936, the Gallup Poll has revealed only one year (1966) in which a minority of the population favored capital punishment, with only 45 percent support. Support has remained fairly constant at around 70 percent through the year 2000 (National Opinion Research Center). Many supporters' arguments for the death penalty derive from the deterrence hypothesis, which suggests that in order to discourage potential murderers from engaging in criminal homicide, society needs capital punishment. In other words, "states with the death penalty should have lower homicide rates than states without the death penalty" (Vold, Bernard, and Snipes 201). In 2000, 42 percent of the United States population felt the death penalty acts as a deterrent to other potential murderers (National Opinion Research Center). Scholars have long believed that if the public were more knowledgeable about the death penalty and its effects, support would not be so high (Shelden). Former Supreme Court Justice Thurgood Marshall, in his concurring opinion in the case of Furman v. Georgia (1972), stated that American citizens know almost nothing about capital punishment. Further, in what has become known as the "Marshall Hypothesis," he stated that "the average citizen" who knows "all the facts presently available regarding capital punishment would ... find it shocking to his conscience and sense of justice" (Walker, Spohn, and DeLone 230). For example, a Gallup poll asked whether respondents supported the death penalty, then asked whether they would support it if there were proof that the deterrence theory was incorrect. Twenty-four percent of the respondents showed a change in their support of capital punishment (Radelet and Akers).
Background

Capital punishment in the United States has gone through periods in which most states either abolished it altogether or never used it, and periods in which it was commonly used (Shelden). The landmark Supreme Court decisions of Furman v. Georgia (1972) and Gregg v. Georgia (1976) rekindled the longstanding controversy surrounding capital punishment (Shelden). In Furman v. Georgia, the Court found that the death penalty, as it was then being administered, constituted "cruel and unusual punishments" in violation of the Eighth and Fourteenth Amendments to the United States Constitution. This decision suspended all capital punishment in the United States but left leeway for states to revise their practices. Appeals began flowing through the Court, and within four years of Furman the Court made perhaps its most significant ruling on the matter (Shelden). In Gregg v. Georgia (1976), the Court ruled, "A punishment must not be excessive, but this does not mean that the states must seek the minimal standards available. The imposition of the death penalty for the crime of murder does not violate the Constitution." The moratorium was lifted and a path was cleared for the first execution in ten years. After a de facto abolition of capital punishment, it was reinstated in 1977 with the execution of Gary Gilmore by a firing squad in Utah (Shelden). Currently, 38 states, the federal government, and the United States military continue to execute those convicted of capital murder. Illinois and Maryland have moratoriums placed on the death penalty in their jurisdictions (Death Penalty Information Center). As recently as 2000, a number of jurisdictions in the United States have questioned the fairness and effectiveness of the death penalty.
For instance, in January of 2000, Governor George Ryan of Illinois declared a moratorium on all executions after the state had released thirteen innocent inmates from death row in the same period in which it had executed twelve. Ryan then appointed a blue-ribbon Commission on Capital Punishment to study the issue in greater detail. On January 10, 2003, Ryan pardoned four death row inmates after lengthy investigations revealed abuse of defendants' rights, including torture during interrogation (Death Penalty Information Center). The following day (also his last day in office), Ryan granted clemency to all of the remaining 156 death row inmates in Illinois, as a result of the flawed process that had led to these sentences. According to the Death Penalty Information Center, "Ryan's decision to grant today's commutations reflects his concern that Illinois' death penalty system lacked uniform standards designed to avoid arbitrary and inappropriate death sentences." It should be noted that the 156 clemencies did not result in the release of the inmates, since many still face life in prison.

Deterrence Theory

According to Siegel, deterrence is defined as "the act of preventing a crime before it occurs by means of the threat of criminal sanctions; deterrence involves the perception that the pain of apprehension and punishment outweighs any chances of criminal gain or profit" (616). The theory of deterrence stemmed from the work of Cesare Beccaria, who has been known as "the leader of the classical school of thought" (del Carmen 21). Beccaria received a degree from the University of Pavia in Italy in 1758. Upon graduating, he embarked on work as a mathematician, but soon became interested in politics and economics. Beccaria met regularly with Alessandro Verri, an official of the prison in Milan, and his brother Pietro Verri, an economist, in a group of young men who gathered to discuss philosophical and literary topics (Vold, Bernard, and Snipes).
In March 1763, Beccaria was given the responsibility of writing an essay on the topic of penology. With little knowledge in the field, he went to the Verri brothers for assistance and drafted the essay. In 1764, his influential essay On Crimes and Punishments was published (del Carmen). He listed ten principles proposing various reforms to make criminal justice practices more logical and rational (Vold, Bernard, and Snipes). Beccaria's work is known as one of the first pleas for reform in the treatment of criminals. His concept that "the punishment should fit the crime" was a major contribution to the classical school of thought. Beccaria felt severe punishment was not necessary; the only reasons to punish were to assure the continuance of society and to deter others from committing crimes. Further, deterrence stemmed from appropriate, prompt, and inevitable punishment, rather than from severe punishment. Regarding the death penalty, Beccaria believed it did not deter others and was an act of brutality and violence by the state (del Carmen). Finally, in one of his ten recommendations, Beccaria argued that punishments of excessive severity not only fail to deter crime, but actually increase it (Vold, Bernard, and Snipes). The theory of deterrence was then neglected for about a century. In 1968, criminologists sparked a resurgence of interest when Jack P. Gibbs published the first study that attempted to test the deterrence hypothesis (Vold, Bernard, and Snipes). Gibbs defined the certainty of punishment as the ratio between the number of prisoners admitted in a given year and the number of crimes known to police in the prior year. He defined the severity of punishment as the mean number of months served by all persons convicted of a given crime who were in prison in that year. His research found that greater certainty and severity were associated with fewer homicides for the year 1960.
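Gibbs's two measures are simple ratios, and can be sketched in code. The figures below are invented for illustration only; they are not data from the 1968 study.

```python
def certainty(prisoners_admitted: int, crimes_prior_year: int) -> float:
    """Ratio of prison admissions in a given year to crimes known
    to police in the prior year (Gibbs's certainty measure)."""
    return prisoners_admitted / crimes_prior_year

def severity(months_served: list) -> float:
    """Mean number of months served by all persons imprisoned for
    the offense in that year (Gibbs's severity measure)."""
    return sum(months_served) / len(months_served)

# Hypothetical state-level homicide data for one year
print(certainty(prisoners_admitted=40, crimes_prior_year=100))  # 0.4
print(severity([120.0, 96.0, 144.0]))                           # 120.0
```

Comparing these two numbers across states against each state's homicide rate is, in essence, the test Gibbs performed.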
Gibbs concluded that both certainty and severity of imprisonment might deter homicide. Charles R. Tittle analyzed similar statistics regarding certainty and severity of punishment for the seven "index offenses" in the FBI Uniform Crime Reports (Vold, Bernard, and Snipes). Tittle concluded that the certainty of imprisonment deters crime, but that severity deters crime only when certainty is quite high (Vold, Bernard, and Snipes). In 1978, the National Academy of Sciences produced a report that reviewed previous deterrence research and found that more evidence favored a deterrent effect than opposed it. Vold, Bernard, and Snipes state that the deterrent effectiveness of the death penalty is probably the single most researched topic in the area of criminology. In 1998, Daniel Nagin reviewed studies of deterrence and argued that deterrence research has evolved into three types of literature. One of these types examines criminal justice policies in different jurisdictions, and the crime rates associated with those policies, to determine whether there is a deterrent effect. Vold, Bernard, and Snipes recognize that a large number of studies have been conducted regarding this issue; however, the results have been inconclusive. For example, the deterrence hypothesis implies that death penalty states should have lower homicide rates than states without the death penalty. As Gibbs's and Tittle's research showed, however, death penalty states have considerably higher murder rates than non-death-penalty states. Vold, Bernard, and Snipes conclude that, more than likely, this results from states implementing the death penalty because of higher murder rates. Radelet and Akers state that, because of the scant empirical support for general deterrence and the death penalty, most criminologists have concluded that capital punishment does not reduce crime.
Furthermore, several researchers have found that the death penalty actually increases homicides (Bailey). Thorsten Sellin, one of the leading authorities on capital punishment, has suggested that if the death penalty deters prospective murderers, the following hypotheses should be true:

(a) Murders should be less frequent in states that have the death penalty than in those that have abolished it, other factors being equal. Comparisons of this nature must be made among states that are as alike as possible in all other respects (character of population, social and economic conditions, etc.) in order not to introduce factors known to influence murder rates in a serious manner but present in only one of the states.

(b) Murders should increase when the death penalty is abolished and should decline when it is restored.

(c) The deterrent effect should be greatest, and should therefore affect murder rates most powerfully, in those communities where the crime occurred and its consequences are most strongly brought home to the population.

(d) Law enforcement officers should be safer from murderous attacks in states that have the death penalty than in those without it.

Sellin's research indicates that not one of these conjectures is true. Further, his statistics illustrate that there is no correlation between the murder rate and the presence or absence of capital punishment. For example, Sellin compares states with similar characteristics and finds that regardless of a state's position on capital punishment, they have similar murder rates. Finally, Sellin's study concluded that abolition and/or reintroduction of the death penalty had no significant effect on the homicide rates of the various states involved.

Summary

The death penalty has long been one of the most debated issues in the American justice system. Most advocates claim that the punishment protects society by deterring murderers from repeating their crimes.
Additionally, proponents proclaim that criminals are more likely to choose not to commit murder if the death penalty is a possible sanction. On the other hand, opponents of the death penalty argue that no study has convincingly shown evidence of such a deterrent effect. In fact, they argue that most studies have not only failed to show a deterrent effect, but have conversely suggested that punishment by death might even have a brutalization effect. In other words, they suggest that criminal executions brutalize society by legitimating the killing of human beings, which ultimately leads to an increase in the rates of criminal homicide. Deterrence basically refers to the idea that punishing persons who commit crime prevents other similarly disposed individuals from doing so. There are two types of deterrence, specific and general. Death penalty proponents argue for the importance of specific deterrence and its preventive effect in protecting society from a second crime by the same offender, who might escape or be released from prison. In other words, the death penalty takes away the offender's opportunity to commit murder again. This type of deterrence obviously deters only the offender concerned. In this sense, punishment by death acts as a specific deterrent in 100% of cases, since a deceased offender will never have the opportunity to recidivate. General deterrence, by contrast, assumes that the thought of the death penalty as a potential cost of offending acts as a form of dissuasion. The belief is that offenders consider punishment by death when contemplating their acts, which convinces them not to act and therefore results in a lower probability of their committing crimes.
Additionally, proponents of the death penalty argue that such a punishment is the only way to deter imprisoned offenders from killing other inmates or guards while incarcerated. Without the death penalty as a possible sanction, a murderer incarcerated for life would have nothing to lose by killing again. With the death penalty as a possibility, the inmate has his life to lose.

Works Cited

Bailey, William C. "Deterrence, Brutalization, and the Death Penalty." Criminology 36.4 (1998): 711-33.
Cockburn, Alexander. "Hate Versus Death." Nation 272.10 (2001): 9-11.
Death Penalty Information Center. What's New, 2008.
del Carmen, Alejandro. Corrections. Madison, Wis.: Coursewise Publishing, 2000.
Chiricos, Theodore G., and Gordon P. Waldo. "Punishment and Crime: An Examination of Some Empirical Evidence." Social Forces 18.2 (1970): 200-17.

Friday, November 8, 2019

Bus Reservation System Essays

Bus Reservation System

A PROJECT REPORT ON BUS RESERVATION SYSTEM
Submitted in partial fulfillment for the award of the degree of Post Graduate Diploma in Information Technology (2008-10)
Submitted by: BRIJ MOHAN DAMMANI, 200852200
Submitted to: Symbiosis Centre for Distance Learning, Pune 411016, Maharashtra, India

ACKNOWLEDGEMENT

A project like this takes quite a lot of time to do properly. As is often the case, this project owes its existence, and certainly its quality, to a number of people whose names do not appear on the cover. Among them is one of the most extraordinary programmers it has been my pleasure to work with, Mr. Ankur Kaushik, who did more than just check the facts, offering thoughtful logic where needed to improve the project as a whole. We also thank Mr. Hardayal Singh (H.O.D., MCA Department, Engineering College Bikaner), who deserves credit for helping me complete the project and for taking care of all the details that most programmers really don't think about. Errors and confusions are my responsibility, but the quality of the project is to their credit, and we can only thank them. We are highly thankful and feel obliged to the Milan Travels staff members for their kind co-operation and valuable suggestions during my project work. We owe our gratitude to my friends and other colleagues in the computer field for their co-operation and support. We thank God for being on my side.

Contents

Chapter 1: Introduction
Chapter 2: Development Model
Chapter 3: System Study
Chapter 4: Project Monitoring System
Chapter 5: System Analysis
Chapter 6: Operating Environment
Chapter 7: System Design
Chapter 8: System Testing
Chapter 9: System Implementation
Chapter 10: Conclusion
Chapter 11: Scope of the Project

Introduction

In the bus reservation system there is a collection of buses and agents who book tickets for customers' journeys, recording the bus number and the departure time of the bus.
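The entities this introduction describes (buses, agents, tickets, seats) can be sketched as a minimal data model. All class and field names here are hypothetical illustrations, not taken from the actual project code.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Ticket:
    passenger: str
    seat_no: int
    agent: Optional[str] = None  # the agent's name is optional, per the report

@dataclass
class Bus:
    number: str
    origin: str
    destination: str
    departure_time: str
    seat_fare: float
    seats: Dict[int, Ticket] = field(default_factory=dict)  # seat no -> ticket

def book_seat(bus: Bus, seat_no: int, passenger: str,
              agent: Optional[str] = None) -> Ticket:
    """Reserve a seat, failing if it is already taken."""
    if seat_no in bus.seats:
        raise ValueError(f"seat {seat_no} is already booked")
    ticket = Ticket(passenger, seat_no, agent)
    bus.seats[seat_no] = ticket
    return ticket

# Hypothetical bus and booking
bus = Bus("RJ-07", "Bikaner", "Jaipur", "06:30", 250.0)
ticket = book_seat(bus, 12, "A. Kumar", agent="Milan Travels")
print(ticket.seat_no)  # 12
```

A real system would add sleeper and cabin fares, seat availability queries, and persistence, as the following sections of the report describe.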
The system manages the details of all agents, tickets, rentals, timings, and so on, as well as updates to these objects. The tour detail holds information about the bus that carries customers to their destination; it also contains detailed information about each customer, including which bus he or she travels on and how many members accompany him or her on the journey. This section also records the booking time of the seat(s), the collection time of the tickets, the booking date, and (optionally) the name of the agent through whom the customer reserved seats for the journey. Under the bus category, the system stores the details of buses both old and new. New buses are added with the bus number, the origin and destination cities, the type of the bus, the rent of a single seat, the cost of a sleeper (if the bus has sleepers), the cost of cabin seats (if the cabin has seating), and the tour timings of the new bus. It also tracks how many buses are currently assigned and how many are available at the office. The seat specification gives the list of issued and currently available seats and holds information about seat types such as sleeper, cabin, etc. The main objective of this project is to provide better work efficiency, security, accuracy, reliability and feasibility. Errors can be reduced to nil and working conditions improved.

Development Model

Software Process Model

Our project life cycle uses the waterfall model, also known as the classic life cycle model or linear sequential model.

The Waterfall Model

The waterfall model encompasses the following activities:

1. System/Information Engineering and Modeling. System engineering and analysis encompass requirements gathering at the system level, with a small amount of top-level design and analysis. Information engineering encompasses requirements gathering at the strategic business level and at the business area level.
2. Software Requirements Analysis. Requirements for both the system and the software are documented and reviewed with the customer.

3. Design. Software design is a multi-step process that focuses on four distinct attributes of a program: data structure, software architecture, interface representation, and procedural detail. The design process translates requirements into a representation of the software that can be assessed for quality before coding begins.

4. Code Generation. The code-generation phase translates the design into a machine-readable form.

5. Testing. Once code has been generated, program testing begins. Testing focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and to ensure that defined input will produce actual results that agree with required results.

6. Support. Software will undoubtedly undergo change after it is delivered to the customer. Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment, or because the customer requires functional or performance enhancements.

System Study

Before the project can begin, it is necessary to estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish. While making this plan we visited the site many times.

3.1 Project Planning Objectives

The objective of software project planning is to provide a framework that enables the management to make reasonable estimates of resources, cost, and schedule. These estimates are made within a limited time frame at the beginning of a software project and should be updated regularly as the project progresses. In addition, estimates should attempt to define best-case and worst-case scenarios so that project outcomes can be bounded.

3.2 Software Scope

The first activity in software project planning is the determination of software scope. Software scope describes the data and control to be processed, function, performance, constraints, interfaces, and reliability.

3.2.1 Gathering Information Necessary for Scope

The most commonly used technique for bridging the communication gap between the customer and the software developer, and for getting the communication process started, is to conduct a preliminary meeting or interview. When we visited the site we were introduced to the manager of the centre; two other persons were present, one a technical adviser and the other a cost accountant. Neither side knew what to ask or say, and we were worried that what we said would be misinterpreted. We started by asking context-free questions; that is, a set of questions that would lead to a basic understanding of the problem. The first set of context-free questions was:

What do you want to be done?
Who will use this solution?
What is wrong with your existing working system?
Is there another source for the solution?
Can you show us (or describe) the environment in which the solution will be used?

After the first round of questions, we revisited the site and asked a final set of questions:

Are our questions relevant to the problem that you need solved?
Are we asking too many questions?
Should we be asking you anything else?

3.2.2 Feasibility

Not everything imaginable is feasible, not even in software. Software feasibility has four dimensions:

Technology: is the project technically feasible? Is it within the state of the art?
Finance: is it financially feasible?
Time: will the project be completed within the specified time?
Resources: does the organization have the resources needed to succeed?

After taking these dimensions into consideration, we found it feasible for us to develop this project.

3.3 Software Project Estimation

Software cost and effort estimation will never be an exact science. Too many variables (human, technical, environmental, political) can affect the ultimate cost of software and the effort applied to develop it. However, software project estimation can be transformed from a black art into a series of systematic steps that provide estimates with acceptable risk. To achieve reliable cost and effort estimates, a number of options arise:

1. Delay estimation until late in the project (since we can achieve 100% accurate estimates after the project is complete!).
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and effort estimates.
4. Use one or more empirical models for software cost and effort estimation.

Unfortunately, the first option, however attractive, is not practical: cost estimates must be provided "up front". However, we should recognize that the longer we wait, the more we know, and the more we know, the less likely we are to make serious errors in our estimates. The second option can work reasonably well if the current project is quite similar to past efforts and other project influences (e.g., the customer, business conditions, the SEE, deadlines) are equivalent. Unfortunately, past experience has not always been a good indicator of future results. The remaining options are viable approaches to software project estimation. Ideally, the techniques noted for each option should be applied in tandem, each used as a cross-check for the other. Decomposition techniques take a "divide and conquer" approach to software project estimation: by decomposing a project into major functions and related software engineering activities, cost and effort estimation can be performed in a stepwise fashion. Empirical estimation models can be used to complement decomposition techniques and offer a potentially valuable estimation approach in their own right.
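One well-known concrete instance of such an empirical model is the basic COCOMO effort equation, E = a * KLOC^b (person-months). The report itself does not name a specific model, so COCOMO is offered here only as an illustrative example; the coefficients are the published values for "organic" (small, familiar) projects.

```python
def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO estimate of effort in person-months from estimated
    thousands of lines of code (KLOC), organic-mode coefficients."""
    return a * kloc ** b

# A hypothetical 10-KLOC project
print(round(cocomo_effort(10), 1))  # ~26.9 person-months
```

This is exactly the shape d = f(v_i) discussed below: one estimated value (effort) computed from one independent parameter (estimated LOC), with coefficients seeded from historical data.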
A model based on experience (historical data) takes the form

d = f(v_i)

where d is one of a number of estimated values (e.g., effort, cost, project duration) and the v_i are selected independent parameters (e.g., estimated LOC, lines of code). Each of the viable software cost estimation options is only as good as the historical data used to seed the estimate. If no historical data exist, costing rests on a very shaky foundation.

Project Monitoring System

4.1 PERT Chart

Program evaluation and review technique (PERT) and the critical path method (CPM) are two project scheduling methods that can be applied to software development. These techniques are driven by the following information:

Estimates of effort
A decomposition of the product function
The selection of the appropriate process model and task set
Decomposition of tasks

The PERT chart for this application software is illustrated in figure 4.1. The critical path for this project is design, code generation, and integration and testing.

Figure 4.1: PERT chart for the Bus Reservation System.

4.2 Gantt Chart

A Gantt chart, also known as a timeline chart, contains information such as effort, duration, start date, and completion date for each task. A timeline chart can be developed for the entire project. Figure 4.2 shows the Gantt chart for the project, with all project tasks listed in the left-hand column:

1.1 Identify needs and benefits: meet with customers; identify needs and constraints; establish product statement. Milestone: product statement defined.
1.2 Define desired output/control/input (OCI): scope modes of interaction; document OCI; FTR: review OCI with customer; revise OCI as required. Milestone: OCI defined.
1.3 Define the function/behavior. Milestone: data modeling completed.
1.4 Isolate software elements: coding; reports.
1.5 Integration and testing.

Figure 4.2: Gantt chart for the Bus Reservation System. (The original chart lists the planned and actual start and completion week/day for each task; analysis and design were the most time-consuming.) Start: May 17, 2010; finish: August 15, 2010. Note: Wk1 = week 1, d1 = day 1.

System Analysis

Software requirements analysis is a process of discovery, refinement, modeling, and specification. Requirements analysis provides the software designer with a representation of information, function, and behavior that can be translated into data, architectural, interface, and component-level designs. To perform the job properly we need to follow a set of underlying concepts and principles of analysis.

5.1 Analysis Principles

Over the past two decades, a large number of analysis modeling methods have been developed. Investigators have identified analysis problems and their causes, and have developed a variety of modeling notations and corresponding sets of heuristics to overcome them. Each analysis method has a unique point of view. However, all analysis methods are related by a set of operational principles:

1. The information domain of a problem must be represented and understood.
2. The functions that the software is to perform must be defined.
3. The behavior of the software (as a consequence of external events) must be represented.
4. The models that depict information, function, and behavior must be partitioned in a manner that uncovers detail in a layered (or hierarchical) fashion.
5. The analysis process should move from essential information toward implementation detail.

By applying these principles, we approach the problem systematically. The information domain is examined so that function may be understood more completely.
Models are used so that the characteristics of function and behavior can be communicated in a compact fashion. Partitioning is applied to reduce complexity. Essential and implementation views of the software are necessary to accommodate the logical constraints imposed by processing requirements and the physical constraints imposed by other system elements. We have tried to take the above principles to heart so that we could provide an excellent foundation for design.

5.1.1 The Information Domain

All software applications can be collectively called data processing. Software is built to process data: to transform data from one form to another, that is, to accept input, manipulate it in some way, and produce output. This fundamental statement of objective is true whether we build batch software for a payroll system or real-time embedded software to control fuel flow to an automobile engine. The first operational analysis principle requires an examination of the information domain and the creation of a data model. The information domain contains three different views of the data and control as each is processed by a computer program:
(1) information content and relationships (the data model),
(2) information flow, and
(3) information structure.
To fully understand the information domain, each of these views should be considered. Information content represents the individual data and control objects that constitute some larger collection of information transformed by the software. For example, the data object Status declare is a composite of a number of important pieces of data: the aircraft's name, the aircraft's model, ground run, number of hours flying and so forth. Therefore, the content of Status declare is defined by the attributes that are needed to create it. Similarly, the content of a control object called System status might be defined by a string of bits.
Each bit represents a separate item of information that indicates whether or not a particular device is on- or off-line. Data and control objects can be related to other data and control objects. For example, the data object Status declare has one or more relationships with objects such as total number of hours flying, period left for the maintenance of the aircraft and others. Information flow represents the manner in which data and control change as each moves through a system. Referring to figure 5.1, input objects are transformed to intermediate information (data and/or control), which is further transformed to output. Along this transformation path, additional information may be introduced from an existing data store (e.g., a disk file or memory buffer). The transformations applied to the data are functions or sub-functions that a program must perform. Data and control that move between two transformations define the interface for each function.

Figure 5.1 Information flow and transformation.

5.1.2 Modeling

The second and third operational analysis principles require that we build models of function and behavior.

Functional models. Software transforms information, and in order to accomplish this it must perform at least three generic functions: input, processing, and output. The functional model begins with a single context-level model (i.e., the name of the software to be built). Over a series of iterations, more and more functional detail is gathered, until a thorough delineation of all system functionality is represented.

Behavioral models. Most software responds to events from the outside world. This stimulus/response characteristic forms the basis of the behavioral model. A computer program always exists in some state, an externally observable mode of behavior (e.g., waiting, computing, printing, and polling) that is changed only when some event occurs.
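The idea of an externally observable state that changes only when an event occurs can be sketched as a tiny state machine. The C# fragment below is purely illustrative: the states are taken from the list above (waiting, computing, printing, polling), but the event names and transitions are assumptions invented for this sketch, not taken from the project's code.

```csharp
using System;

// States named in the text: externally observable modes of behavior.
enum ProgramState { Waiting, Computing, Printing, Polling }

// Hypothetical events that cause state changes (invented for illustration).
enum ExternalEvent { InputArrived, WorkDone, OutputFlushed }

class BehavioralModel
{
    // The behavioral model: the state changes only when an event occurs.
    public static ProgramState Next(ProgramState s, ExternalEvent e)
    {
        switch ((s, e))
        {
            case (ProgramState.Waiting, ExternalEvent.InputArrived):
                return ProgramState.Computing;
            case (ProgramState.Computing, ExternalEvent.WorkDone):
                return ProgramState.Printing;
            case (ProgramState.Printing, ExternalEvent.OutputFlushed):
                return ProgramState.Waiting;
            default:
                return s; // any other event leaves the state unchanged
        }
    }

    static void Main()
    {
        var s = ProgramState.Waiting;
        s = Next(s, ExternalEvent.InputArrived);  // -> Computing
        s = Next(s, ExternalEvent.WorkDone);      // -> Printing
        s = Next(s, ExternalEvent.OutputFlushed); // -> Waiting
        Console.WriteLine(s);
    }
}
```

A behavioral model of this kind can be drawn as a state-transition diagram; the transition function above is simply that diagram written as code.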
For example, in our case the project will remain in the wait state until:
- We click the OK command button when the first window appears.
- An external event like a mouse click causes an interrupt, and consequently the main window appears, asking for the username and password.
This external input (providing the password and username) signals the project to act in the desired manner as per need. A behavioral model creates a representation of the states of the software and the events that cause the software to change state.

5.1.3 Partitioning (Divide)

Problems are often too large and complex to be understood as a whole. For this reason, we tend to partition (divide) such problems into parts that can be easily understood and establish interfaces between the parts so that the overall function can be accomplished. The fourth operational analysis principle suggests that the information, functional, and behavioral domains of software can be partitioned. In essence, partitioning decomposes a problem into its constituent parts. Conceptually, we establish a hierarchical representation of function or information and then partition the uppermost element by (1) exposing increasing detail by moving vertically in the hierarchy or (2) functionally decomposing the problem by moving horizontally in the hierarchy. To illustrate these partitioning approaches, let us consider our project "Bus Reservation System". Horizontal partitioning and vertical partitioning of the Bus Reservation System are shown below.

Horizontal partitioning:

Bus Reservation System
- System configuration
- Password acceptance
- Interact with user

During installation, the software (Bus Reservation System) is used to program and configure the system. A master password is programmed for getting into the software system. Only after this step can the user work in the environments (operation, administration and maintenance).
Vertical partitioning of the Bus Reservation System function:

Bus Reservation System
- Configure system
- Username and password: acceptance or rejection (on failure, retry)
- Interact with user (operating environment)

Chapter 6 Operating Environment

6.1 Hardware Specification:

Server side:
- Core 2 Duo 2.4 GHz and above
- 2 GB of random access memory and above
- 160 GB hard disk

Client side:
- Pentium IV 1.5 GHz and above
- 512 MB of random access memory and above
- 80 GB hard disk

6.2 Software Specification:
- Environment: .NET Framework 3.5
- Technologies: ASP.NET, C#
- Database: MS Access
- Software: Visual Studio 2008, Notepad++
- OS: Windows Server 2003 R2, Windows XP SP2
- Browser: IE7, IE8, FF 3.5

6.2.1 Front-end Environment (.NET Framework)

The Internet revolution of the late 1990s represented a dramatic shift in the way individuals and organizations communicate with each other. Traditional applications, such as word processors and accounting packages, are modeled as stand-alone applications: they offer users the capability to perform tasks using data stored on the system the application resides and executes on. Most new software, in contrast, is modeled on a distributed computing model where applications collaborate to provide services and expose functionality to each other. As a result, the primary role of most new software is changing into supporting information exchange (through Web servers and browsers), collaboration (through e-mail and instant messaging), and individual expression (through Web logs, also known as blogs, and e-zines, Web-based magazines). Essentially, the basic role of software is changing from providing discrete functionality to providing services. The .NET Framework represents a unified, object-oriented set of services and libraries that embrace the changing role of new network-centric and network-aware software. In fact, the .NET Framework is the first platform designed from the ground up with the Internet in mind. The Microsoft .NET Framework is a software component that is a part of several Microsoft Windows operating systems. It has a large library of pre-coded solutions to common programming problems and manages the execution of programs written specifically for the framework. The .NET Framework is a key Microsoft offering and is intended to be used by most new applications created for the Windows platform.

Benefits of the .NET Framework

The .NET Framework offers a number of benefits to developers:
- A consistent programming model
- Direct support for security
- Simplified development efforts
- Easy application deployment and maintenance

The .NET Class Library is a key component of the .NET Framework; it is sometimes referred to as the Base Class Library (BCL). The .NET Class Library contains hundreds of classes you can use for tasks such as the following:
- Processing XML
- Working with data from multiple data sources
- Debugging your code and working with event logs
- Working with data streams and files
- Managing the run-time environment
- Developing Web services, components, and standard Windows applications
- Working with application security
- Working with directory services

The functionality that the .NET Class Library provides is available to all .NET languages, resulting in a consistent object model regardless of the programming language developers use.

Elements of the .NET Framework

The .NET Framework consists of three key elements, as shown in the diagram below:
- Common Language Runtime
- .NET Class Library
- Unifying components

1. Common Language Runtime

The Common Language Runtime (CLR) is a layer between an application and the operating system it executes on. The CLR simplifies an application's design and reduces the amount of code developers need to write because it provides a variety of execution services that include memory management, thread management, component lifetime management, and default error handling.
The CLR is also responsible for compiling code just before it executes. Instead of producing a binary representation of your code, as traditional compilers do, .NET compilers produce a representation of your code in a language common to the .NET Framework: Microsoft Intermediate Language, often referred to as IL. When your code executes for the first time, the CLR invokes a special compiler called a Just In Time (JIT) compiler. Because all .NET languages have the same compiled representation, they all have similar performance characteristics. This means that a program written in Visual Basic .NET can perform as well as the same program written in Visual C++ .NET.

2. .NET Class Library

The .NET Class Library contains hundreds of classes that model the system and the services it provides. To make the .NET Class Library easier to work with and understand, it is divided into namespaces. The root namespace of the .NET Class Library is called System, and it contains core classes and data types, such as Int32, Object, Array, and Console. Secondary namespaces reside within the System namespace. Examples of nested namespaces include the following:
- System.Diagnostics: Contains classes for working with the event log
- System.Data: Makes it easy to work with data from multiple data sources
- System.IO: Contains classes for working with files and data streams

The benefits of using the .NET Class Library include a consistent set of services available to all .NET languages and simplified deployment, because the .NET Class Library is available on all implementations of the .NET Framework.

3. Unifying components

Until this point, this chapter has covered the low-level components of the .NET Framework. The unifying components, listed next, are the means by which you can access the services the .NET Framework provides:
- ASP.NET
- Windows Forms
- Visual Studio .NET

ASP.NET

After the release of Internet Information Services 4.0 in 1997, Microsoft began researching possibilities for a new web application model that would solve common complaints about ASP. ASP.NET introduces two major features: Web Forms and Web Services.

1. Web Forms

Developers not familiar with Web development can spend a great deal of time, for example, figuring out how to validate the e-mail address on a form. You can validate the information on a form by using a client-side script or a server-side script. Deciding which kind of script to use is complicated by the fact that each approach has its benefits and drawbacks, some of which aren't apparent unless you've done substantial design work. If you validate the form on the client by using client-side JScript code, you need to take into consideration the browser that your users may use to access the form. Not all browsers expose exactly the same representation of the document to programmatic interfaces. If you validate the form on the server, you need to be aware of the load that users might place on the server. The server has to validate the data and send the result back to the client. Web Forms simplify Web development to the point that it becomes as easy as dragging and dropping controls onto a designer (the surface that you use to edit a page) to design interactive Web applications that span from client to server.

2. Web Services

A Web service is an application that exposes a programmatic interface through standard access methods. Web Services are designed to be used by other applications and components and are not intended to be useful directly to human end users. Web Services make it easy to build applications that integrate features from remote sources. For example, you can write a Web Service that provides weather information for subscribers of your service instead of having subscribers link to a page or parse through a file they download from your site.
Clients can simply call a method on your Web Service as if they were calling a method on a component installed on their system, and have the weather information available in an easy-to-use format that they can integrate into their own applications or Web sites with no trouble.

Introducing ASP.NET

ASP.NET, the next version of ASP, is a programming framework that is used to create enterprise-class Web applications. These enterprise-class Web applications are accessible on a global basis, leading to efficient information management. However, the advantages that ASP.NET offers make it more than just the next version of ASP. ASP.NET is integrated with Visual Studio .NET, which provides a GUI designer, a rich toolbox, and a fully integrated debugger. This allows the development of applications in a What You See Is What You Get (WYSIWYG) manner. Therefore, creating ASP.NET applications is much simpler. Unlike the ASP runtime, ASP.NET uses the Common Language Runtime (CLR) provided by the .NET Framework. The CLR is the .NET runtime, which manages the execution of code. The CLR allows objects that are created in different languages to interact with each other and hence removes the language barrier. The CLR thus makes Web application development more efficient. In addition to simplifying the designing of Web applications, the .NET CLR offers many advantages. Some of these advantages are listed as follows.

Improved performance: The ASP.NET code is compiled CLR code instead of interpreted code. The CLR provides just-in-time compilation, native optimization, and caching. Here, it is important to note that compilation is a two-stage process in the .NET Framework. First, the code is compiled into the Microsoft Intermediate Language (MSIL). Then, at execution time, the MSIL is compiled into native code. Only the portions of the code that are actually needed will be compiled into native code. This is called Just In Time compilation.
These features lead to an overall improved performance of ASP.NET applications.

Flexibility: The entire .NET class library can be accessed by ASP.NET applications. You can use the language that best applies to the type of functionality you want to implement, because ASP.NET is language independent.

Configuration settings: The application-level configuration settings are stored in an Extensible Markup Language (XML) format. The XML format is a hierarchical text format, which is easy to read and write. This format makes it easy to apply new settings to applications without the aid of any local administration tools.

Security: ASP.NET applications are secure and use a set of default authorization and authentication schemes. However, you can modify these schemes according to the security needs of an application.

In addition to this list of advantages, the ASP.NET framework makes it easy to migrate from ASP applications.

Creating an ASP.NET Application

After you've set up the development environment for ASP.NET, you can create your first ASP.NET Web application. You can create an ASP.NET Web application in one of the following ways:

Use a text editor: In this method, you can write the code in a text editor, such as Notepad, and save the code as an ASPX file. You can save the ASPX file in the directory C:\inetpub\wwwroot. Then, to display the output of the Web page in Internet Explorer, you simply need to type http://localhost/ followed by the name of the ASPX file in the Address box. If the IIS server is installed on some other machine on the network, replace localhost with the name of the server. If you save the file in some other directory, you need to add the file to a virtual directory in the Default Web Site directory on the IIS server. You can also create your own virtual directory and add the file to it.

Use the VS.NET IDE: In this method, you use the IDE of Visual Studio .NET to create a Web page in a WYSIWYG manner.
Also, when you create a Web application, the application is automatically created on a Web server (IIS server). You do not need to create a separate virtual directory on the IIS server.

Characteristics

Pages

ASP.NET pages, known officially as web forms, are the main building block for application development. Web forms are contained in files with an ASPX extension; in programming jargon, these files typically contain static (X)HTML markup, as well as markup defining server-side Web Controls and User Controls where the developers place all the required static and dynamic content for the web page. Additionally, dynamic code which runs on the server can be placed in a page within a <% %> block, which is similar to other web development technologies such as PHP, JSP, and ASP, but this practice is generally discouraged except for the purposes of data binding, since it requires more calls when rendering the page. Note that this sample uses code inline, as opposed to code-behind:

protected void Page_Load(object sender, EventArgs e)
{
    Label1.Text = DateTime.Now.ToLongDateString();
}

The sample page displays "The current time is:" followed by the text of the Label1 control set above.

Code-behind model

Microsoft recommends dealing with dynamic program code by using the code-behind model, which places this code in a separate file or in a specially designated script tag. Code-behind files typically have names like MyPage.aspx.cs or MyPage.aspx.vb based on the ASPX file name (this practice is automatic in Microsoft Visual Studio and other IDEs). When using this style of programming, the developer writes code to respond to different events, like the page being loaded or a control being clicked, rather than a procedural walk through the document. ASP.NET's @ Page directive, placed at the beginning of the ASPX file, connects the two files. The CodeFile property of the @ Page directive specifies the file (.cs or .vb) acting as the code-behind, while the Inherits property specifies the class the page derives from.
In this example, the @ Page directive is included in SamplePage.aspx, and SampleCodeBehind.aspx.cs acts as the code-behind for this page:

using System;

namespace Website
{
    public partial class SampleCodeBehind : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // page initialization code goes here
        }
    }
}

In this case, the Page_Load() method is called every time the ASPX page is requested. The programmer can implement event handlers at several stages of the page execution process to perform processing.

User controls

ASP.NET supports creating reusable components through the creation of User Controls. A User Control follows the same structure as a Web Form, except that such controls are derived from the System.Web.UI.UserControl class and are stored in ASCX files. Like ASPX files, an ASCX file contains static HTML or XHTML markup, as well as markup defining web controls and other User Controls. The code-behind model can be used. Programmers can add their own properties, methods, and event handlers. An event-bubbling mechanism provides the ability to pass an event fired by a user control up to its containing page.

Template engine

When first released, ASP.NET lacked a template engine. Because the .NET framework is object-oriented and allows for inheritance, many developers would define a new base class that inherits from System.Web.UI.Page, write methods there that render HTML, and then make the pages in their application inherit from this new class. While this allows for common elements to be reused across a site, it adds complexity and mixes source code with markup. Furthermore, this method can only be visually tested by running the application, not while designing it. Other developers have used include files and other tricks to avoid having to implement the same navigation and other elements in every page. ASP.NET 2.0 introduced the concept of master pages, which allow for template-based page development.
A web application can have one or more master pages, which can be nested. Master templates have place-holder controls, called ContentPlaceHolders, to denote where the dynamic content goes, as well as HTML and JavaScript shared across child pages. Child pages use those ContentPlaceHolder controls, which must be mapped to the place-holder of the master page that the content page is populating. The rest of the page is defined by the shared parts of the master page, much like a mail merge in a word processor. All markup and server controls in the content page must be placed within the ContentPlaceHolder control. When a request is made for a content page, ASP.NET merges the output of the content page with the output of the master page, and sends the output to the user. The master page remains fully accessible to the content page. This means that the content page may still manipulate headers, change the title, configure caching, etc. If the master page exposes public properties or methods (e.g., for setting copyright notices) the content page can use these as well.

Performance

ASP.NET aims for performance benefits over other script-based technologies (including classic ASP) by compiling the server-side code to one or more DLL files on the web server. This compilation happens automatically the first time a page is requested (which means the developer need not perform a separate compilation step for pages). This feature provides the ease of development offered by scripting languages with the performance benefits of a compiled binary. However, the compilation might cause a noticeable but short delay to the web user when the newly edited page is first requested from the web server, but won't again unless the requested page is updated further. The ASPX and other resource files are placed in a virtual host on an Internet Information Services server (or other compatible ASP.NET servers; see Other Implementations, below). The first time a client requests a page, the .NET framework parses and compiles the file(s) into a .NET assembly and sends the response; subsequent requests are served from the DLL files. By default ASP.NET will compile the entire site in batches of 1000 files upon first request. If the compilation delay is causing problems, the batch size or the compilation strategy may be tweaked. Developers can also choose to pre-compile their code before deployment, eliminating the need for just-in-time compilation in a production environment.

Database Queries

The most common operation in SQL databases is the query, which is performed with the declarative SELECT keyword. SELECT retrieves data from a specified table, or multiple related tables, in a database. While often grouped with Data Manipulation Language (DML) statements, the standard SELECT query is considered separate from SQL DML, as it has no persistent effects on the data stored in a database. Note that there are some platform-specific variations of SELECT that can persist their effects in a database, such as the SELECT INTO syntax that exists in some databases. SQL queries allow the user to specify a description of the desired result set, but it is left to the devices of the database management system (DBMS) to plan, optimize, and perform the physical operations necessary to produce that result set in as efficient a manner as possible. An SQL query includes a list of columns to be included in the final result immediately following the SELECT keyword. An asterisk (*) can also be used as a wildcard indicator to specify that all available columns of a table (or multiple tables) are to be returned. SELECT is the most complex statement in SQL, with several optional keywords and clauses, including:
- The FROM clause, which indicates the source table or tables from which the data is to be retrieved. The FROM clause can include optional JOIN clauses to join related tables to one another based on user-specified criteria.
- The WHERE clause, which includes a comparison predicate used to restrict the number of rows returned by the query. The WHERE clause is applied before the GROUP BY clause. It eliminates all rows from the result set where the comparison predicate does not evaluate to True.
- The GROUP BY clause, which is used to combine, or group, rows with related values into elements of a smaller set of rows. GROUP BY is often used in conjunction with SQL aggregate functions or to eliminate duplicate rows from a result set.
- The HAVING clause, which includes a comparison predicate used to eliminate rows after the GROUP BY clause is applied to the result set. Because it acts on the results of the GROUP BY clause, aggregate functions can be used in the HAVING clause predicate.
- The ORDER BY clause, which is used to identify which columns are used to sort the resulting data, and in which order they should be sorted (ascending or descending). The order of rows returned by an SQL query is never guaranteed unless an ORDER BY clause is specified.

The following is an example of a SELECT query that returns a list of expensive books. The query retrieves all rows from the Book table in which the price column contains a value greater than 100.00. The result is sorted in ascending order by title. The asterisk (*) in the select list indicates that all columns of the Book table should be included in the result set.

SELECT * FROM Book WHERE price > 100.00 ORDER BY title;

The example below demonstrates the use of multiple tables in a join, grouping, and aggregation in an SQL query, by returning a list of books and the number of authors associated with each book.

SELECT Book.title, count(*) AS Authors
FROM Book JOIN Book_author ON Book.isbn = Book_author.isbn
GROUP BY Book.title;

Example output might resemble the following:

Title                 Authors
s and Guide           3
The Joy of SQL        1
How to use Wikipedia  2
Pitfalls of SQL       1
How SQL Saved my Dog  1

(The underscore character _ is often used as part of table and column names to separate descriptive words because other punctuation tends to conflict with SQL syntax. For example, a dash - would be interpreted as a minus sign.)

Under the precondition that isbn is the only common column name of the two tables and that a column named title only exists in the Book table, the above query could be rewritten in the following form:

SELECT title, count(*) AS Authors
FROM Book NATURAL JOIN Book_author
GROUP BY title;

However, many vendors either do not support this approach or require certain column naming conventions. Thus, it is less common in practice. Data retrieval is very often combined with data projection when the user is looking for calculated values and not just the verbatim data stored in primitive data types, or when the data needs to be expressed in a form that is different from how it is stored. SQL allows the use of expressions in the select list to project data, as in the following example, which returns a list of books that cost more than 100.00 with an additional sales_tax column containing a sales tax figure calculated at 6% of the price.

SELECT isbn, title, price, price * 0.06 AS sales_tax
FROM Book
WHERE price > 100.00
ORDER BY title;

Some modern-day SQL queries may include extra WHERE conditions that are conditional on each other. They may look like this example:

SELECT isbn, title, price, date
FROM Book
WHERE price > 100.00 AND (date = 16042004 OR date = 16042005)
ORDER BY title;

Chapter 7 System Design

E-R DIAGRAM: The following DFD shows how the working of a reservation system can be smoothly managed.

DETAIL DESCRIPTION OF DATA FLOW DIAGRAM: We have STARBUS as our database, and some of our tables (relations) are AGENT_BASIC_INFO, FEEDBACK, PASSANGER_INFO, STATIS and TIME_LIST.

In our table AGENT_BASIC_INFO we have fields such as agent_id, agent_name, agent_fname, agent_shop_name, agent_shop_address, agent_shop_city, agent_phon_number etc. In our FEEDBACK table we have fields like name, Email, Phon, Subject, Comment, and User_type. In our table PASSANGER_INFO we have fields like bill_no, c_name, c_phone, c_to, c_from, c_time, Ttalseat, Seatnumber, Amount, Agent_id and Status. In the table TIME_LIST we have fields such as Sno, Satation_name, Rate_per_seat, Time, Reach_time and Bus_number.

PROCESS LOGIC: As the privatization of buses increases, the need for their smooth management also increases. The more we facilitate the customers, the more comfortable they are with us, and the more customers we have visiting our reservation unit. The above tables and modules support logic such as:
- Number of buses in one unit
- Number of computers in a particular department
- Number of users in a department
- Which bus has what tour on which day
- The timetables for different buses of different departments
- The schedules for buses, and the schedule of a particular bus
- How many buses there are
- How many seats each bus has
- How many seats are occupied
- Advance booking for seats
- How much money is collected on a particular day
- Bills for different customers
- Which seat has been booked by which agent

1. Index page
This webpage is the starting page of the website. It gives the following:
- Toll-free numbers for other cities.
- Displays the advantages of StarBus.
- Links for the agent list and seat status.
- Links for Feedback, FAQ, Terms and Conditions.
2. Status
As shown in the above image, the Status webpage:
- Can be accessed by anyone.
- Shows booking information: which seats are booked and which are empty.

3. Agent name
As shown in the above image, the Agent name webpage:
- Can be accessed by anyone.
- Contains the name, address and phone number of each agent.

4. Feedback
As shown in the above image, the Feedback webpage:
- Can be accessed by any user.
- Lets anyone give feedback related to the site or its services.
- Has links for Terms and Conditions and Privacy Policy.

5. FAQ
As shown in the above image, the FAQ webpage:
- Can be accessed by any user.
- Contains information about the tours and services of the website, such as how many agent offices there are and what the modes of payment are.

6. Privacy Policy
As shown in the above image, the Privacy Policy webpage:
- Can be accessed by any user.
- Explains that when a customer uses our services, we require information such as his/her name, age, route and email, so that we can also send information to their email.

7. Terms and Conditions
As shown in the above image, the Terms and Conditions webpage:
- Can be accessed by anyone.
- Is useful for customers.
- Explains when to reach the starting point and what to do in case a ticket is lost.

8. Login page
As shown in the image, the Login webpage:
- Can be accessed by the agent.
- The agent enters a username and password and clicks Login.
- Contains a link for Forgot Password.

9. Forgot Password page
As shown in the image, the Forgot Password webpage:
- Asks for the username of the user who forgot the password, after which the Next button is clicked.
- Also provides links for administration and others.

10. Identity Confirmation
As shown in the above image, the Identity Confirmation webpage:
- Shows the security question selected at the time of registration.
- The user enters the answer to that question and clicks Next.
- The password is then shown on the Show Password webpage.

11. Ticket Booking page
As shown in the above image, the Ticket Booking page:
- Can be accessed only by the agent.
- Is used to select the destination, departure date and time.

12. Select Seat page
As shown in the above image, the Select Seat page:
- Can be accessed only by the agent.
- Shows booked seats in red; any of the remaining seats can be chosen, and a chosen seat turns green.

13. Customer Information page
As shown in the above image, the Customer Information webpage:
- Is used after selecting the seat.
- The agent enters the name and phone number of the customer.
- Clicking the Go button prints the ticket.

14. Ticket Print page
As shown in the above image, the Ticket Print webpage:
- Prints the customer's ticket.
- The ticket contains customer information such as name, destination and seat numbers.
- Printing the ticket also reduces the agent's balance.

15. Search Ticket
As shown in the above image, the Ticket Search webpage:
- Can be accessed only by the agent and the administrator.
- Using the PNR number, the agent can search for a ticket.

16. Ticket Cancellation
As shown in the above image, the Ticket Cancellation webpage:
- Can be accessed only by the agent and the administrator.
- Using the PNR number, the agent can see the status of a ticket.

17. Change Password
As shown in the above image, the Change Password webpage:
- Can be accessed only by the agent.
- The agent can change the password by entering the old and new passwords.

Administrator Section:

18. Create Agent
As shown in the above image, the Create Agent webpage:
- Can be accessed only by the administrator.
- New agents are added through this page.
- It requires the following information:
  - Username
  - Password
  - Email
  - Security question
  - Security answer
- After clicking the Create User button, it sends you to the Agent Basic Information webpage.

19. Agent Basic Information page
As shown in the above image, the agent's Basic Information webpage:
- Is where an agent's basic information is added.
- It requires the following information:
  - Name
  - Father's name
  - Shop name
  - Shop city
  - Shop phone number
  - Mobile number
  - Deposit amount

20. Agent List page
As shown in the above image, the Agent List webpage:
- Can be accessed only by the administrator.
- Displays agent information such as:
  - Agent ID
  - Name
  - Shop name
  - Shop city
  - Current balance
  - Mobile number

21. Agent Deposit Amount page
As shown in the above image, the agent's Deposit Amount webpage:
- Can be accessed only by the administrator.
- Requires the agent's name and the amount he wants to deposit.

22. Search Agent page

Bus List:

Feedback List:

Chapter 8 System Testing

Once source code has been generated, software must be tested to uncover (and correct) as many errors as possible before delivery to the customer. Our goal is to design a series of test cases that have a high likelihood of finding errors. To uncover the errors, software testing techniques are used. These techniques provide systematic guidance for designing tests that (1) exercise the internal logic of software components, and (2) exercise the input and output domains of the program to uncover errors in program function, behavior and performance.

8.1 Steps
Software is tested from two different perspectives: (1) internal program logic is exercised using "white-box" test case design techniques; (2) software requirements are exercised using "black-box" test case design techniques. In both cases, the intent is to find the maximum number of errors with the minimum amount of effort and time.

8.2 Strategies
A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements. A strategy must provide guidance for the practitioner and a set of milestones for the manager.
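The low-level tests this strategy calls for can be illustrated with a small sketch. The helper below, `calculate_fare`, is a hypothetical function invented for illustration (it is not part of the actual STARBUS code); the test exercises both the normal path and a boundary condition, in the spirit of white-box unit testing.

```python
import unittest

def calculate_fare(rate_per_seat, seats):
    """Hypothetical helper: total fare for a booking (illustration only)."""
    if seats <= 0:
        raise ValueError("seat count must be positive")
    return rate_per_seat * seats

class FareTest(unittest.TestCase):
    def test_normal_booking(self):
        # Typical path: 3 seats at a rate of 250 per seat.
        self.assertEqual(calculate_fare(250, 3), 750)

    def test_boundary_condition(self):
        # Boundary: zero seats must be rejected; unit testing requires
        # exercising the limits established to restrict processing.
        with self.assertRaises(ValueError):
            calculate_fare(250, 0)

# Run the suite explicitly rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FareTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The same pattern scales from such unit tests up through the integration, validation and system tests described below.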
Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible. The following testing techniques are well known, and the same strategy was adopted during the testing of this project.

8.2.1 Unit testing
Unit testing focuses verification effort on the smallest unit of software design: the software component or module. The unit test is white-box oriented. The module interface is tested to ensure that information flows properly into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps of an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that every statement in the module has been executed at least once.

8.2.2 Integration testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective of this test is to take unit-tested components and build the program structure that has been dictated by the design.

8.2.3 Validation testing
At the culmination of integration testing, the software is completely assembled as a package and interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, may begin. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can reasonably be expected by the customer.

8.2.4 System testing
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Below we describe the two types of testing carried out for this project.

8.2.4.1 Security testing
Any computer-based system that manages sensitive information, or causes actions that can improperly harm (or benefit) individuals, is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate the system for sport; disgruntled employees who attempt to penetrate it for revenge; dishonest individuals who attempt to penetrate it for illicit personal gain. For security purposes, no one who is not an authorized user can penetrate this system. When the program first loads, it checks for a correct username and password; anyone who fails to supply them is simply rejected by the system.

8.2.4.2 Performance testing
Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted.

8.3 Criteria for Completion of Testing
Every time the customer/user executes a computer program, the program is being tested. This sobering fact underlines the importance of other software quality assurance activities. However long we run our project, that is still a form of testing, as Musa and Ackerman observed. They have suggested a response that is based on statistical criteria: "No, we cannot be absolutely certain that the software will never fail, but relative to a theoretically sound and experimentally validated statistical model, we have done sufficient testing to say with 95 percent confidence that the probability of 1000 CPU hours of failure-free operation in a probabilistically defined environment is at least 0.95."

8.4 Validation Checks
Software testing is one element of a broader topic that is often referred to as verification and validation. Verification refers to the set of activities that ensure that software correctly implements a specific function.
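The statistical criterion quoted in section 8.3 can be made concrete. Assuming a simple exponential software-reliability model (an assumption for illustration; the report does not specify the model), the probability of failure-free operation for t CPU hours at failure intensity lambda is exp(-lambda * t). The sketch below computes the largest failure intensity still compatible with a 0.95 probability over 1000 CPU hours.

```python
import math

def failure_free_probability(lam, t_hours):
    # Exponential reliability model: R(t) = exp(-lambda * t).
    return math.exp(-lam * t_hours)

# Largest failure intensity (failures per CPU hour) for which the
# probability of 1000 failure-free CPU hours is still at least 0.95:
# exp(-lambda * 1000) >= 0.95  =>  lambda <= -ln(0.95) / 1000.
max_lambda = -math.log(0.95) / 1000
print(f"{max_lambda:.2e}")  # about 5.13e-05 failures per CPU hour

# Sanity check: at that intensity the model gives exactly 0.95.
print(round(failure_free_probability(max_lambda, 1000), 4))  # 0.95
```

In other words, the quoted claim amounts to demonstrating a failure intensity of roughly one failure per 19,500 CPU hours or better.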
Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements. Boehm states this another way:

Verification: "Are we building the product right?"
Validation: "Are we building the right product?"

Validation checks are useful when we specify the nature of the data input. Let me elaborate. In this project, while entering data into many of the text boxes, you will find validation checks in use: when you try to input wrong data, your entry is automatically rejected. At the very beginning of the project, when the user wishes to enter the system, he has to supply the password. This password is validated against a certain string; until the user supplies the correct password string, he cannot proceed. When you try to edit the record for a trainee in the Operation division, you will also find validation checks: if you supply numbers (digits) in the name text box, the entry is not accepted; similarly, if you enter data for the trainee code in text (string) format, it is simply rejected. Validation checks help us work much more reliably, and they are a necessity for applications like this one.

Chapter 9 System Implementation

Specification, regardless of the mode through which we accomplish it, may be viewed as a representation process. Requirements are represented in a manner that ultimately leads to successful software implementation.

9.1 Specification principles
A number of specification principles, adapted from the work of Balzer and Goodman, can be proposed:
1. Separate functionality from implementation.
2. Develop a model of the desired behavior of a system that encompasses data and the functional responses of the system to various stimuli from the environment.
3. Establish the context in which software operates by specifying the manner in which other system components interact with the software.
4. Define the environment in which the system operates.
5. Create a cognitive model rather than a design or implementation model. The cognitive model describes the system as perceived by its user community.
6. Recognize that "the specifications must be tolerant of incompleteness and augmentable."
7. Establish the content and structure of a specification in a way that will enable it to be amenable to change.

This list of basic specification principles provides a basis for representing software requirements. However, principles must be translated into realization.

9.1.2 Representation
As we know, software requirements may be specified in a variety of ways. However, if requirements are committed to paper, a simple set of guidelines is well worth following. Representation format and content should be relevant to the problem: a general outline for the contents of a Software Requirements Specification can be developed, but the representation forms contained within the specification are likely to vary with the application area; for example, for our automation system we used different symbologies and diagrams. Information contained within the specification should be nested: representations should reveal layers of information so that a reader can move to the level of detail required, and paragraph and diagram numbering schemes should indicate the level of detail being presented. It is sometimes worthwhile to present the same information at different levels of abstraction to aid understanding. Similar guidelines were adhered to in my project.

Chapter 10 Conclusion

To conclude, Project Grid works like a component which can access all the databases and pick up different functions. It overcomes many of the limitations found in the .NET Framework.
Among the many features provided by the project, the main ones are:
- Simple editing
- Insertion of individual images in each cell
- Insertion of individual colors in each cell
- Flicker-free scrolling
- Drop-down grid effect
- Placing of any type of control anywhere in the grid

Chapter 11 Scope of the Project

Future scope of the project: The project has a very vast scope in the future. It can be implemented on the internet, and it can be updated as and when new requirements arise, as it is very flexible in terms of expansion. With the proposed Web Space Manager software ready and fully functional, the client is now able to manage, and hence run, the entire work in a much better, more accurate and error-free manner. The following are points of future scope for the project:
- The number of levels that the software handles, currently up to N levels, can be made unlimited in the future.
- Efficiency can be further enhanced to a great extent by normalizing and de-normalizing the database tables used in the project, as well as by adopting alternative data structures and advanced calculation algorithms.
- We can in the future generalize the application from its current customized status, so that other vendors developing and working on similar applications can utilize this software and change it according to their business needs.
- Faster processing of information as compared to the current system, with high accuracy and reliability.
- Automatic and error-free report generation in the specified format, with ease.
- Automatic calculation and generation of correct and precise bills, thus reducing much of the workload on the accounting staff and the errors arising from manual calculations.
- With a fully automated solution, less staff, better space utilization and a peaceful work environment, the company is bound to experience higher turnover.
A further merit of this system lies in the fact that the proposed system would remain relevant in the future. Any addition or deletion of services, any addition or deletion of a reseller, or any other type of modification can be implemented easily in the future. The data collected by the system will also be useful for other purposes. All this results in high client satisfaction and hence more and more business for the company, which will scale the company's business to new heights in the forthcoming future.

References:
- Complete Reference of C#
- Programming in C# – Deitel & Deitel
- www.w3schools.com
- http://en.wikipedia.org
- The Principles of Software Engineering – Roger S. Pressman
- Software Engineering – Hudson
- MSDN help provided by Microsoft
- .NET Object Oriented Programming – Deitel & Deitel