Wednesday, December 25, 2019
Discussion Research On Parent Involvement Essay - 805 Words
Prior Research on Parent Involvement in Education

Before turning to our qualitative study of parent involvement in urban charter schools, the following sections outline the prior research on the benefits of parent involvement, the barriers to involvement that exist, and the potential of the charter school context to reduce these barriers.

Benefits of Parent Involvement

Decades of research point to the numerous benefits of parent involvement in education, not only for students but also for the parents involved, the school, and the wider community (Barnard, 2004; Epstein, 2001; Fan & Chen, 2001; Henderson & Mapp, 2002; Jeynes, 2003, 2007; Lee & Bowen, 2006). Despite the challenges in establishing a causal link between parent involvement and student achievement, studies utilizing large databases have shown positive and significant effects of parent involvement on both academic and behavioral outcomes (Fan & Chen, 2001; Jeynes, 2003, 2007). For example, research has found that parent involvement is related to a host of student achievement indicators, including better grades, attendance, attitudes, expectations, homework completion, and state test results (Astone & McLanahan, 1991; Cancio, West, & Young, 2004; Dearing, McCartney, Weiss, Kreider, & Simpkins, 2004; Gutman & Midgley, 2000; Izzo, Weissberg, Kasprow, & Fendrich, 1999; Senechal & LeFevre, 2002; Sheldon, 2003). Additional academic outcomes such as lower dropout rates (Rumberger, 1995), fewer …

Related essays: Influence of Parenting Styles and Practices Globally (1302 words); Research Methodology for Adopting a Mixed-Method Approach (1282 words); Parental Involvement and Children's Education (813 words); The Role of Parental Involvement and Children's Academic Success (1443 words); Involving Young Children in the Decision-Making Process in Families and Schools (1246 words); The Impact of Individual and Institutional Characteristics on Teachers' Perceptions (1283 words); The Academic Outcomes for Children (1384 words); The Involvement of Child Protective Services (968 words); Evaluation of an Early Literacy Program (1163 words); Children Are the Future: Get Involved (1319 words)
Tuesday, December 17, 2019
The Rationale Of Social Media - 908 Words
The Rationale of Social Media

Nowadays, more companies are using social media as a way to collect and analyze data from customers and competitors. The social media rationale responds to the urge of companies to manage their relationships with customers in a strategic way, and in order for social media to be an effective tool, it must be incorporated into the marketing, sales, and operations of a company by using a social media platform. A social media platform is the technology that allows a company to centralize all customer interactions in one place by finding out who is talking about the company's products and policies. The platform also allows the integration of both "onboard" communities (sites owned by the company) and "off-board" communities (those that are not owned by the company); the platform receives social media content from many sources, such as blog posts, Twitter tweets, social networking sites, discussion boards, and product review sites, among others. After the data is collected, software is used to analyze the information, and depending on the trends, managers can focus on how to respond to customers by creating strategies. As described by Smith and Wollan, the platform has six main components:

1. Customer services: onboard social media channels that include blogs, ratings and reviews, referrals and sharing, forums, user-created content management, member profile management, social networking, cross-channel synchronization, and ideation or idea management.
2. …

Related essays: Mkt 500 Week 1 to Week 11 Discussion (1413 words); Ikea Media Plan (1132 words); Marketing Objectives of a Company's Website (1562 words); Concept Paper (1098 words); What Is the Closing Analysis of the Advertising Campaign? (1154 words); Research Review Example (2093 words); Loan and Typical Financial Covenants (811 words); The Theory of Workplace Planning (794 words); The Effects of Jurors Consulting the Internet and Social Media (1620 words); Regulating Digital Communication Nationally (1218 words)
Monday, December 9, 2019
The Last Letter free essay sample
It's been almost a year, and we are still sent mail with his address. I have grown so used to this that I can tell which they are before I even look. There are letters and magazines of all shapes and sizes advertising cars, credit cards and various household products. There are impersonal letters, too, sent by his bank or creditors, with big red stamps notifying the world that the address given no longer has anyone living there. But now, when there is a Send to Forwarding Address letter in the pile of mail, it is usually from his lawyer. That is the hardest part about walking down the driveway to the mailbox: I don't know what I will find.

My granddad died last February. It happened on the last day of vacation, the fastest, yet slowest, vacation I ever had. My sister and I stayed home while my parents flew to New York to go to the hospital where my granddad lay and where he would eventually pass away. My granddad didn't want his granddaughters to see him like that, so we were left at home, jumping every time the phone rang. How was I supposed to say good-bye to him? My parents told me to write a note telling him what I had been doing and how school was; he had always been interested in my academics. So that was what I did. I wrote as if everything were normal, as if he were not lying in a hospital bed.

I couldn't just send an ordinary letter! This was going to be the last time I would communicate with him. There were so many things I had always wanted to tell him but never did. I wanted him to know how much I respected him for emigrating with my dad and aunts from England and working so many jobs so they could live comfortably and get a good education. I wanted him to know how amazing it was that he was offered a fellowship at Princeton, and how admirable it was when he turned it down because he put his family first. I wanted him to know how special it was that he was a bombardier in World War II and had survived being shot down into the ocean, drifting for days until he was rescued. I could go on listing all the extraordinary things my granddad did, and it was hard not to in that letter. Once I was done, I sent it overnight to the hospital.

The rest of the day, and into that night, I worried that it would arrive too late. Then I got a phone call from my dad. He had gotten the letter and read it to my granddad. According to my father, the letter had had a great effect on him and everyone who overheard it. My granddad's eyes had filled with tears, and he had clearly been moved by what I had written. I am so glad that I finally told him how I felt. A few days later the phone rang, but this time I didn't jump. It was over. My granddad had lost his battle.

The Send to Forwarding Address letters started appearing a few weeks after my parents returned home. And they still come; every few days or so, a letter will stick out with that telltale stamp. That is the hardest part of walking down the driveway to the mailbox: I don't know what to expect. I hate sifting through the piles and seeing the letters with my granddad's name. But at the same time, I dread the day those letters stop coming.
Sunday, December 1, 2019
The Kray twins were born in 1933 Essay Example
The Kray twins were born in 1933. They lived in the East End of London and soon took up the traditional way of life that their family had led for generations. They got involved with two local gangsters, Billy Hill and Jack Comer, and this is what eventually led to their rise as gangland supremos. A popular view is that the media orchestrated the Krays' transition from villains to heroes, from antidisestablishmentarianists to conquering idols. There are various sources to back up these two statements, and the question itself suggests that this is the case. There will always be different views on the Krays; some more sensationalist papers will portray them as glamorous and suggest that the lives they led were fine and generally above the law. The source by Gordon Burn of the Observer Sport Monthly does just this. The Krays are compared to Ronnie O'Sullivan and his dad. The fact that his dad is in prison for murder is not really touched upon. Their situation is described as "no problem. Nothing is a problem. Lovely." Further into the article more famous gangsters and hoodlums are mentioned, including the Richardsons, the Lambrianous and the Maltese Messina brothers.

The paper continues to glorify the O'Sullivans by hailing them "the fighting O'Sullivans." Some similarities are suggested between the Krays and the O'Sullivans, mainly that they both have a strong sense of family loyalty and stick up for each other. With newspaper articles like this we can see how the Krays may have been made from villains into heroes. There are other sources that take a different view to this. The Cult of Violence, by John Pearson, is another modern article but is more blunt and doesn't seem to side with the Krays. It expressly calls them killers on more than one occasion. He also talks about their less endearing qualities that he picked up on. This source, most importantly, tells us that the Krays set him up to tell the world about their killings and butchery. This definitely means that the Krays tried to manipulate the media to increase their fame and recognition. This source also shows that the whole of the media were not totally biased in the Krays' favour and that some people tell it how it really was. If there is one source that is most guilty of the glamorisation of the Krays and of the gangster lifestyle, then it is The Kray Twins: Brothers in Arms by Thomas L. Jones. The source reminds us of how poor the conditions were for the people of the East End. At this point the article might be trying to make us feel sorry for the way that the Kray twins grew up. It could also provide an explanation of why they turned out the way they did. There are other points in this source that also show positive points about famous criminals, like using the word "celebrated" about Jack the Ripper. The Krays are later described as famous and infamous gangsters. This, as well as other parts of the source, like their success and ease of achievement, shows the support the Krays had from the media and helps us to understand how they became famous.
The source also tries to excuse the two murders that they committed by calling the victims "miserable, lowlife street thugs with little to redeem" and "about as sympathetic a duo as Goebbels and Himmler." The source then goes on to say that they received the heaviest prison sentence ever handed down by a British court of law. This source would be heavily influential on people's view of the Krays, and you can definitely see how they were made from villains into heroes. Another important aspect of the media support of the Krays is that there is a film made about them. This is definite proof that at least some media glamorised them. The Krays had previously tried to have a film made about them; that is a good example of how they influenced the media to make themselves into famous heroes. The front cover of The Krays film shows them wearing dark suits and ties, therefore rich and successful. There is another source which gives us more of an insight into why the media might have wanted to give the Krays so much attention. It is by Edward Lawson and it is called The Story of the Daily Telegraph. It tells us how the paper thrives on crime, as most writers and biographers do. It also admits that sometimes papers overdo it when reporting crime. This suggests that it might not have been down to the Krays to get their fame, but more the papers trying to make some money. There is one more source that I will discuss. That is "End of a murderous duo" by John Macleod from the Herald. This takes a negative view of the Krays: "Society has earned a rest from your activities." This source shows that there were papers opposed to the Krays and that they couldn't influence everyone. However, the article was written in 2000, and so it doesn't necessarily reflect what the papers were saying in the sixties and the seventies. The papers have said a lot of things about the Krays throughout the decades. Some have been good and others have taken a more negative approach to their existence. There has been a film made about them and numerous documentaries. Their rise to fame was partly due to the media's hype about them and partly due to the way they manipulated the media into supporting them and getting good publicity from them. There are many sources we can use to support this view, like the Observer Sport Monthly and The Kray Twins: Brothers in Arms. They show us how crime is glamorised by the media and how the Krays were able to manipulate the papers. So overall, the Krays would not have made the transition from lowlife villains to famous heroes without the help of the papers and the media.
Tuesday, November 26, 2019
Free Essays on Angle Of Repose And Ego States
Throughout the entire novel, the narrator, Lyman Ward, illustrates all three ego states: Parent, Adult and Child. Lyman's physical state and encounters with others influence his ego state status. The retired professor's ego states are brought out by his work and the people he interacts with. When Lyman Ward interacts with Shelly Rasmussen, the aide who performs secretarial duties, he normally is in his Parent ego state. Shelly's choice of lifestyle and intellect creates a conflict between the professor and the aide. When she begins to question him on his grandmother's sexual conduct, he attempts to control her actions by questioning her and telling her that was the way it was: "that's what they would have done, turned out the light." Also, when talking about his ex-wife, Lyman Ward becomes short and quickly ends a conversation with his son, Rodman. Lyman directs Rodman's behavior by ending the conversation quickly and telling him that he has nothing to say to her and that should be told to her.

When writing his novel, Lyman is in the Adult ego state. With little emotion, Lyman dictates the historical account of his grandmother's transient life. Being a retired professor of history, he describes all the people and places with a historical accuracy that his career allowed him. Lyman describes all the events in his grandmother's life as they were, helping the reader of his book understand and learn the history of his life. Lyman goes into the Adult ego state once again when he is thinking about Ada's version of Shelly's marital situation. He is trying to work out for himself what is correct and not correct and avoid the motherly biases that Ada has. Later in the book, Lyman questions himself. He questions his intentions, why he is doing what he is doing and why he is there. This is another sign that Lyman is in his Adult ego state. He is answering the questions with factual and neutral answers. Lyman …
Saturday, November 23, 2019
How Arizona's SB1070 Law Stands After the Court's Decision
Cmo queda ley Arizona SB1070 tras decisin de Corte En 2010, el estado de Arizona inicià ³ con Ley SB1070 un empuje para tratar de restringir la inmigracià ³n indocumentada dentro de su territorio, siendo su ejemplo seguido por otros estados como Alabama, Georgia y Utah. Esta ley fue objeto de gran debate polà tico y su suerte se decidià ³ en las cortes federales. En este artà culo se informa sobre cules provisiones de la ley fueron impugnadas por la administracià ³n del presidente Barack Obama, quà © decidià ³ la Corte Suprema de Estados Unidos al respecto y quà © se puede aplicar de la Ley SB1071 en la actualidad y quà © no se puede, por considerarse inconstitucional. Partes de la ley SB1070 de Arizona que aplican Por decisià ³n conocida como Arizona vs. United States y con cinco votos contra tres de la Corte Suprema de Estados Unidos en junio de 2012 se decidià ³ que es constitucional la parte de la ley SB1070 que concede a los oficiales de policà a del estado de Arizona el poder para investigar el estatus migratorio de todas aquellas personas que detiene, arresta o para y de las que se sospeche razonablemente que pueden ser extranjeros indocumentados. Adems, siempre han aplicado porque nunca se les impugnà ³ las provisiones que establecen que el estado de Arizona, los condados y los municipios no pueden limitar la accià ³n de la policà a a la hora de aplicar las leyes federales de inmigracià ³n. Lo mismo aplica a la provisià ³n que autoriza castigar a toda persona que es contratada o contrata desde un vehà culo. No importa, en este à ºltimo caso, que el que contrate sea un ciudadano estadounidense. La ley convierte a esta actividad en ilegal tambià ©n para à ©l o ella, quien se arriesga a sufrir las consecuencias, incluida la posibilidad de perder el auto desde el que pretendà a contratar a una persona que se encontraba en la calle o en una esquina solicitando trabajo. Secciones de la ley SB1070 de Arizona que no aplican Las siguientes provisiones no aplican: En primer lugar, la obligacià ³n para todos los extranjeros mayores de 14 aà ±os y que pasen ms de 30 dà as en EEUU de que se registren con las autoridades federales y que lleven consigo en todo momento la documentacià ³n que pruebe que se han registrado. En segundo lugar, la disposicià ³n que consideraba delito tener o buscar un trabajo en Arizona si no se tiene un permiso federal para trabajar. En tercer lugar, la disposicià ³n que autorizaba a la policà a a detener a todos los inmigrantes de los que exista sospecha de que han cometido una ofensa que tiene como castigo la deportacià ³n. La situacià ³n migratoria actual en los Estados Unidos Con la llegada a la Casa Blanca del presidente Donald Trump se han producido importantes cambios en materia migratoria en relacià ³n a refugiados, asilados y tambià ©n a migrantes indocumentados. Asà , en la actualidad son prioridad para deportacià ³n prcticamente todos los indocumentados. La à ºnica excepcià ³n a esa regla general por el momento son los 750 mil muchachos conocidos como Dreamers y que estn protegidos por el programa de la Accià ³n Diferida, que se conoce por sus siglas en inglà ©s de DACA. Sin embargo, incluso para ellos la situacià ³n es complicada porque el el propio presidente puso fin a ese programa. Los muchachos con DACA aprobado con anterioridad a la decisià ³n del presidente Trump siguen amparados, al menos por el momento, por decisiones judiciales, pero no se admiten aplicaciones nuevas al programa. 
Por otro lado, los migrantes indocumentados tienen derechos que no pueden ser ignorados y es aconsejable que todos ellos conozcan quà © puede hacer y quà © deben callar en el caso de ser arrestados o detenidos. Sin embargo, mientras el gobierno federal y algunos estados endurecen las medidas para restringir la migracià ³n indocumentada, otros estados mantienen o promueven su proteccià ³n dentro de los là mites permitidos a las autoridades estatales o municipales, como por ejemplo, el caso de las ciudades santuario. Otro ejemplo es el de los estados que emiten licencias de manejar para los indocumentados, como es el caso de California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Maryland, Nevada, Nuevo Mà ©xico, Utah, Vermont y Washington, asà como la ciudad de Washington D.C. la capital de Estados Unidos. Por à ºltimo, cabe destacar que dependiendo de las circunstancias de cada migrante, en ocasiones es posible encontrar un camino para regularizar la situacià ³n y obtener una tarjeta de residente permanente, tambià ©n conocida como green card. Este es un artà culo informativo. No es asesorà a legal de ningà ºn tipo. Puntos Clave de la Ley SB1070 de Arizona restrictiva de la migracià ³n indocumentada La Ley SB1070 de Arizona fue una de las primeras y duras con objeto de restringir la migracià ³n indocumentada en su territorio. Fue objeto de gran debate y la Corte Suprema decidià ³ que parte de la misma era inconstitucional.En la actualidad estn en vigor y pueden ser aplicadas las siguientes provisiones:La policà a puede informarse sobre el estatus migratorio de cualquier persona que para, arresta o detiene y de la que sospeche que puede estar en EE.UU. ilegalmente.La policà a de las ciudades y condados no puede impedir la aplicacià ³n de ninguna ley migratoria federal.Es ilegal contratar o ser contratado desde un vehà culo.Los migrantes mayores de 18 aà ±os estn obligados a llevar un documento que pruebe que estn en el paà s legalmente. Este es un artà culo informativo. No es asesorà a legal.
Thursday, November 21, 2019
The Tragic Decline of BlackBerry Essay Example | Topics and Well Written Essays - 500 words
This, coupled with the modern design it consistently infuses with minor details, separates it from the rest. But the unsparing competition in its arena has made BlackBerry almost desperate in its attempts to claim a viable share of the market against other specialized and well-established counterparts. In its quest to take a cut and compete with giants such as Apple, Sony Ericsson, Google and Microsoft, its maker, Research in Motion, is constantly on its feet testing the market and looking for a solid niche that goes beyond its comfortable smartphone sphere. It has ventured into other business lines that have fallen quite short of expectations and overall appeal. The release of the BlackBerry PlayBook, which claimed it would topple Apple's iPad, was an epic failure; users and techies dismissed the product as a major letdown. Harry McCracken, in his article "BlackBerry: Vision Needed," deemed it tremendously disappointing, and this could be attributed to what Ben Bajarin, in "The Tragic Decline of BlackBerry," refers to as lost customer interest. These two articles recognize the problems that BlackBerry is facing. There must be a deeper look into what products will define BlackBerry's identity instead of merely releasing new ones for the sake of market share. McCracken said that it was a good move for the company not to announce any new product at DevCon in San Francisco, in contrast to what it did in 2010, when it built the hype for the PlayBook. Instead, it is focusing on the new operating system called BBX. Bajarin is on the same page, saying that RIM's attempts to partake of all the glamor fail to impress the actual customers who use the product. These types of exposure are the personality of Steve Jobs and the hallmark of Apple; BlackBerry need not get in on that mix and should instead stick to what it does best.
Tuesday, November 19, 2019
Marketing Plan Term Paper Example | Topics and Well Written Essays - 2500 words - 3
This target market is viable because the foods are purchased for consumption in offices, schools and sometimes at home. This represents a real market audience. The NRO plays a fundamental role in meeting the demands of this market niche and expanding its market base of existing and new products. The NRO, through its strong distribution channels, intends to expand its market by advertising the Company's products. The NRO's employees in the marketing and advertising section should embrace online marketing and ads to increase its customer base (Wood, 2003).

2.2 Marketing strategy

Online marketing should be facilitated by online marketing and advertising channels. Tools such as Google, Twitter and other online platforms need to be used in a bid to expand awareness and accessibility of the General Mills products. The ads should include the types of products, prices and locations of retail outlets. To sustain the current international sales of $3 billion, the superiority of the Company's brands should be strategically positioned in both new and existing market niches (Luther, 2001). Equally, the NRO should break down the current barriers to market expansion. For instance, the issue of retail placement fragmentation in the case of the single-serving pre-prepared meal group should be rectified, defined and addressed. The placement channel is fundamental in sustaining the product market through a flow of product information to the market audience that is consistent with the current GIS distribution channels (Luther, 2001).
Sunday, November 17, 2019
Scene Analysis of Twelfth Night Essay Example for Free
Scene Analysis of Twelfth Night Essay Feste, the Fool, disguises himself as Sir Topas, a priest, and visits Malvolio in his imprisonment, under direction of Maria and Sir Toby. Malvolio is relieved to hear the voice of the priest and believes the priest might release him from his prison. Malvolio makes the claim that he is not insane and is wrongly imprisoned in darkness. Feste tells Malvolio that he is in a well-lit room and that the darkness is simply ignorance. Sir Toby becomes afraid that if this jest goes on for any longer, Olivia, his niece might kick him out of her house. Sir Toby urges Feste to talk to Malvolio as himself. Feste, however, is having a bit of fun with his new alter ego. Feste begins talking to Malvolio as himself, but he begins using both personas in the conversation. Malvolio still urges Feste that he is sane and asks Feste to bring him a pen, some paper and a light. Feste offers to retrieve the requested items. 3. This scene deals directly with the ideas of identity and insanity found throughout the play. Feste dresses like a priest in order to assume the identity of Sir Topas. However, Malvolio is in darkness and is incapable of seeing Feste. The disguise is not needed, but the usage of the disguise points to identity being a direct result of personal appearance. Feste must dress as a priest in order to act like a priest. Previously, Malvolio dressed rather absurdly and was, by the same logic inherent in Festeââ¬â¢s costuming, insane. The scene also changes the audienceââ¬â¢s perception of Malvolio. Earlier in the play, Malvolioââ¬â¢s character is a boring burden of sobriety on the community. As such a character, his humiliation seems warranted. In this scene, however, he is helpless. Feste treats Malvolio like a toy and attempts to convince him that he is truly insane. . The sceneââ¬â¢s location in the play breaks up the action involving Sebastian in the first and third scenes of Act IV. This sceneââ¬â¢s tone is lighter and comical in what would be a more serious act. It also adds the perspective of a brief passage of time between the two Sebastian scenes, thus allowing Oliviaââ¬â¢s character to depart and collect the priest that is to marry her to Sebastian. 5. This scene directly affects the tone of the final act of the play. Malvolioââ¬â¢s resistance to Feste as the fool insists he is mad helps portray Malvolio as he sole person that is fully aware of his own identity. Malvolio knows that he is sane, whereas insanity holds onto other more frenetic characters. His stalwart sanity makes him incapable of letting down his guard and joining in the fun. At the playââ¬â¢s close, Malvolio finds out that Olivia did not write the love note, and his imprisonment was the result of a practical joke. If Malvolio were capable of buying into Festeââ¬â¢s claims that he was insane, he might have been more accepting of the joke. Instead, he claims he will have his rev enge and adds a sour tone to the ending of the play.
Thursday, November 14, 2019
Computer Engineer :: essays research papers
Introduction and History

Computer engineering is a very time-consuming, challenging job. To be a good computer engineer you need years of experience and a college education. Computer engineers provide information and data processing for certain computer firms and organizations. They conduct research, design computers, and discover and use new principles and ideas for applying computers. I am going to tell you specific facts about the careers of computer engineers, like payment, education needed, skills, responsibilities of the job, job outlook, and benefits of the job.

Computer engineering started about 5,000 years ago in China when they invented the abacus. The abacus is a manual calculator in which you move beads back and forth on rods to add or subtract. Other inventors of simple computers include Blaise Pascal, who came up with the arithmetic machine for his father's work. Also, Charles Babbage produced the Analytical Engine, which combined math calculations from one problem and applied them to solve other complex problems. The Analytical Engine is similar to today's computers.

Occupation's Duties

A computer engineer has certain duties and responsibilities depending on the location and size of the firm he or she works for. Also, the duties vary between job levels. If you work at a small firm, you will be set up on the firing line immediately and will be expected to make your boss money or you'll be fired. Also, in a smaller firm you'll probably spend hours of painstaking time trying to solve a problem that other engineers probably went over before. In larger firms, you'll probably be hired as a junior computer engineer and work your way up to senior and maybe manager of engineers. If you enjoy challenging work and problem solving, your best place to be is in a small firm. If you enjoy problem solving, but not to the severe degree found in a small firm, then your place is in a larger firm. Overall, the responsibilities and duties are basically the same. These duties include preparing cost-benefit analyses on programs, determining computer software and hardware needs, and debugging computers to eliminate errors.

Benefits

The benefits of being a computer engineer are unique to those people who like having challenges put in front of them to solve. This job puts you in challenging positions that you have to problem-solve to get out of. Also, computer engineers have the ability to fix or even build their own computers. Computer engineers get paid vacations, holidays, and sick days.
Tuesday, November 12, 2019
Nonverbal Essay
223 S. 2nd Street
Sunbury, Pa. 17801
The date

TITLE OF YOUR ESSAY

On February 24, 2010, my supervisor, Roy Love, and I had a 45-minute meeting concerning a few of the problems which I felt were a bad reflection of our ability to perform what was required of us at Congra. I felt the problem of a fellow co-worker taking too much time off was putting too much of a burden on the rest of my co-workers. We also discussed the problem of a fellow co-worker not performing the duties required in his task assignment. Mr. Love and I have worked together for about 1 year. I felt I knew him well enough to know he was a fair man and did his job to a professional standard. When Mr. Love and I were talking, I caught myself not making direct eye contact while I was constantly rocking back and forward. I feel uncomfortable talking about fellow co-workers; I feel it is wrong to talk behind the backs of my fellow workers. During our meeting I felt my tone of voice getting louder when I was upset. Considering the meeting, I should not have felt this way, because it is better to get your opinions out in the open so things can get resolved. When things go unresolved, there becomes a lack of communication, which causes a lot of anger displayed toward the other co-worker. Then the company which contracted this cleaning service in their plant feels maybe they made a mistake hiring this cleaning crew, because they were not living up to Congra's standards.

During the course of the meeting we asked Matt to join in this meeting. Mr. Love explained to him why he was being asked to join the meeting. Mr. Love explained the problem I was having with his absenteeism and his poor job performance. I felt uncomfortable talking with Matt because he was a young man who did not take criticism lightly. He showed his anger in the way he stood and the expression on his face. After the meeting he did not talk to me the rest of the night. I discovered that I had at least 3 bad nonverbal listening habits that needed a great deal of improvement. I need to learn a lot more about how to improve my eye contact skills. My posture needs a great deal of improvement, and I need to try to go into a meeting with a calmer attitude. I need to learn that he is only my supervisor and he will not judge me for my opinions. When there is a problem, I should not be afraid to talk to him one on one. He appreciated my openness and honesty concerning these problems and hopes to get them resolved quickly. My reaction to what I learned during this meeting was that I need to stop and think, before I enter a meeting, about the way I approach the person I am going to talk with, and that your body language, gestures, eye contact and tone of voice are what the listener sees first of all. I feel I have a better understanding of what others see, and I plan to improve on that in the future.
Saturday, November 9, 2019
Credit Card Debt
Many people use credit cards, and most of the time the credit card is not used at the right moment. I believe that credit cards are not beneficial because they aren't used for the right things. It would be very different if they were used correctly; credit cards are to be used in case of an emergency, meaning not to be used when you are going to the 7-Eleven to buy a bag of chips and a soda. It has been shown that more than 75% of Americans have been bankrupt or on the verge of it. More than 60% of Americans have credit card debt because they are using their cards for the wrong things. Facts have proven that the total U.S. credit card debt is $793.1 billion, and average credit card debt per household is $15,799. Most people do not understand that when you have a credit card it comes with a lot of responsibility, and I say that because more than 10% of Americans have been victims of credit card theft. It may not seem like a lot, but credit card theft is a very serious thing. Most complaints come from adults within the ages of 40-59, and Nevada, Colorado, and New Hampshire have the highest rates of credit card fraud. Having a credit card is not what people think it is; people think that if they have a credit card then they do not have to carry money with them and it is just free money, but it is very dangerous to have a credit card. Having a credit card can lead to bankruptcy, and going bankrupt can make you lose everything, such as your car, house, etc., or, even worse, you could be placed in jail for a long time, just because you had a credit card, used it for the wrong thing and spent way too much money. A credit card is nothing but trouble in each and every way.

The credit card companies and banks are getting richer, while most Americans are getting deeper in debt. The economy is in trouble; therefore, more and more people are relying on credit cards. In today's society we are constantly trying to get out of debt, but in the process of trying to get ourselves out of debt, we create more debt. One of the major problems that most of us are dealing with is credit card debt. Most credit card companies are not looking out for your best interest. They are constantly raising interest rates. Minimum payments are just enough to cover the finance charges. Most Americans should not use credit cards for the following reasons: they will create bad spending habits; you will incur more debt, affecting your credit score rating; and they may possibly make you a victim of identity theft. In my opinion, a credit card should be used for purchases that you are able to pay off in full upon receiving your statement, but most of us don't. Most people lack self-control and tend to misuse the credit card. Credit cards should mainly be used for emergencies, but we tend to use them for everyday purchases such as food, gas, clothing, etc. Some people are living a borrowed lifestyle, because they purchase things they can't afford. People will spend more on a purchase using a credit card than they would with cash. "People that use credit cards tend to spend 12%-18% more on transactions than those who use cash" (faithfitnessfinance.com). For example, if you are going to pay with a credit card in a fast food establishment, it is easier to get the large drink instead of the medium drink. When the statement arrives, most people will make the minimum payment on his/her credit card. The minimum payment only covers the finance charges, which will increase the amount of time it will take to pay the debt off.
ââ¬Å"It will also increase the amount of interest you end up paying
Thursday, November 7, 2019
How to Avoid Embarrassing Editing Marks on Your Documents! MS Word's Track Changes Program
How to Avoid Embarrassing Editing Marks on Your Documents! MS Words Track Changes Program Ever get a document back from an editor that has tons of red or blue lines (maybe even some green ones), and have no idea how to get rid of them all, or view the document the way itââ¬â¢s supposed to look?à This article is for you! [Thanks to Larry Sochrin, MBA Admissions Consultant at The Essay Expert, for contributing instructions for Mac users.] Dont Submit a Document that Looks Like This! Why I Love Track Changes Microsoft Word has a very useful feature called ââ¬Å"Track Changesâ⬠that keeps track of changes that an editor makes to a document, and allows subsequent readers to see what changes were made. When the ââ¬Å"Track Changesâ⬠feature is turned on, anyone who opens the document can see every change made to the original document, whether to fonts, page formats, margins, and text. Track Changes also has a ââ¬Å"Commentsâ⬠feature that allows explanations and suggestions to be entered in the margins of your document. The value of Track Changes to me as an editor is that my clients can see what Iââ¬â¢ve changed, and I can see the changes they make. I do not then have to go through their resume word by word to see what alterations have occurred. Itââ¬â¢s also easy to accept or reject changes, without having to change individual fonts or colors. Gone are the days of manually inserting a strikethrough to indicate a deletion! The Dangers of Track Changes Track Changes can be troublesome too. You donââ¬â¢t want to send a document with lots of red lines and bubbles all over it to an employer or a school (many people have embarrassing stories of doing this)! The recipient then sees all the suggestions, changes, and possibly the original language and mistakes that needed changing. As part of proofreading and preparing the final draft of a resume, cover letter, or essay, take the following steps to ensure that you do not inadvertently send a marked up copy to an employer: Directions for MS Word 1)à Check to see if there are any comments or tracked changes in the document: Go to the ââ¬Å"Reviewâ⬠tab and click on the window that says ââ¬Å"Final Showing Markup.â⬠à Go to the ââ¬Å"Show Markupâ⬠menu and make sure there are check marks in all the boxes (otherwise you might not see the comments or formatting changes when you look at ââ¬Å"Final Showing Markupâ⬠) NOTE:à If the window says ââ¬Å"Finalâ⬠and you do not see any redlines, this does not mean they are gone! Make sure you are viewing the markups before determining that your document is clean. 2)à If you do not see any changes or comments and you do not make any other changes to the document, youââ¬â¢re good to go. 3)à However, if you do see comments and tracked changes, you can do one of two things: Change ââ¬Å"Final: Show Markupâ⬠to ââ¬Å"Finalâ⬠and save the final document as a PDF. This solution works if the place youââ¬â¢re submitting your resume accepts .pdf files. Accept all the tracked changes and delete all edits and comments (unless you only want to accept some of them, in which case see step 4). NOTE: You need to delete edits SEPARATELY from comments! Under the ââ¬Å"Reviewâ⬠tab, go to ââ¬Å"Acceptâ⬠icon and accept all changes. Under the ââ¬Å"Reviewâ⬠tab, go to the icon that says ââ¬Å"Deleteâ⬠(next to the ââ¬Å"New Commentâ⬠icon, and click ââ¬Å"Delete All Comments in Document.â⬠4)à If you want to accept some changes and delete others, you can accept or reject changes and comments one at a time by right clicking on them individually. 
Directions for MS Word 2008 for Mac

1) Check to see if there are any comments or tracked changes in the document: Go to the "Review" tab and find the Markup Options drop-down menu. Make sure there are check marks next to the first three items shown (otherwise you might not see the comments or formatting changes when you look at "Final Showing Markup").

2) If you do not see any changes or comments, and you do not make any other changes to the document, you're good to go.

3) However, if you do see comments and tracked changes, you can do one of two things:

1. Change "All Markup" to "No Markup" and save the final document as a PDF. This solution works if the place you're submitting your resume accepts .pdf files.

2. Accept all the tracked changes and delete all edits and comments (unless you only want to accept some of them, in which case see step 4). NOTE: You need to delete edits SEPARATELY from comments! Go to the Accept menu with the green checkmark and select "Accept All Changes." Then go to the Delete menu with the red X and select "Delete All Comments in Document."

4) If you want to accept some changes and delete others, you can accept or reject changes and comments one at a time: click the icons with the left arrow or right arrow to move to the previous or next change, then click the drop-down menus with the green checkmark or red X to accept or reject each individually.

5) Repeat Step 1.

Important notes for all versions of Word:

If you accept all changes before reviewing the document and there is a comment in the middle of your document like "(dates?)", then that change will be accepted and become a part of your document! Make sure you respond to all questions and make any revisions needed inside your document before accepting all changes.

*ALWAYS* proofread your final document at least 3 times! As much as The Essay Expert and other editors attempt to ensure that your documents are perfect, final approval is ultimately your responsibility.

If you don't want all your future edits to show up as marked on your document, turn Track Changes off by clicking on it. It's a toggled function. Click it on, click it off.

Finally, when you receive an edited document, whenever possible accept or reject the changes before making your own edits! This practice will make it much easier to look at the NEW edits you have made to the document.

Have Track Changes questions? Embarrassing Track Changes stories? Please share in the Comments below!
Tuesday, November 5, 2019
River Birch Is a Favored Yard Tree in the Southern U.S.
River birch has been called the most beautiful of American trees by Prince Maximilian of Wied, the German naturalist who toured North America in the 1830s. It is a favorite yard tree in the southern United States and is sometimes messy to maintain if you are not hands-on when dealing with your yard. Betula nigra, also known as red birch, water birch, or black birch, is the only birch whose range includes the southeastern coastal plain, and it is uniquely the only spring-fruiting birch in North America. Although the wood has limited usefulness, the tree's beauty makes it an ornamental highlight, especially at the northern and western extremes of its natural range. Most river birch bark peels in colorful flakes of brown, salmon, peach, orange, and lavender, a bonus for regions deprived of paper and white birches.

In his book The Urban Tree Book, journalist, novelist, and publisher Arthur Plotnik entices amateur arborists to go tree peeping in U.S. cities. He gives vivid descriptions of trees he spots along his trek: "Only the shaggy brown river birch seems truly adapted to cities, holding its own with urban heat blasts and the deadly borer."

River Birch Habit and Range

River birch grows naturally from southern New Hampshire south and west to the Texas Gulf Coast. River birch is well named, as it loves riparian (wet) zones, adapts well to wet sites, and reaches its maximum size in the rich alluvial soils of the lower Mississippi Valley. Even though it loves wet ecosystems, the tree is heat-tolerant. River birch can survive modest droughts and does not compete with your lawn for water. River birch transplants easily at any age and grows into a medium tree of about 40 feet, rarely to 70 feet. It occupies a large eastern north-south range in North America, from Minnesota to Florida. The tree needs direct sunlight and is intolerant of shade.

River Birch Varieties

The best river birch cultivars are the Heritage and Dura-Heat varieties. The Heritage (or 'Cully') cultivar was selected in 2002 as the tree of the year by the Society of Municipal Arborists. The tree's wood has very little commercial value, but it is extremely popular as an ornamental tree, featuring salmon-cream to brownish bark that peels to reveal a creamy white inner bark nearly as white as that of the white-barked birches. It is hardy in all U.S. climate zones, fast-growing, nicely forked, and wind and ice resistant. Michael Dirr, horticulturist and professor of horticulture at the University of Georgia, praises the cultivar in his book Trees: "Heritage river birch is an excellent selection with superior vigor, larger leaves, and greater resistance to leaf spot."

Dura-Heat is a somewhat smaller cultivar that features creamy white bark color, better tolerance of summer heat, better insect and disease resistance, and superior foliage compared to the species. It typically grows 30 to 40 feet tall as a single-trunk or multi-trunked tree.

Leaves, Flowers, and Fruit of a River Birch

The tree has male and female catkins, which are slim, cylindrical flower clusters grouped in threes. The small cone-like fruit opens and sheds small nutlet seeds in spring. What makes yard work a chore with the river birch are the falling catkins, fruit, and flaking bark that constantly litter the yard. The summer leaves have a leathery texture, dark green on the upper side and light green on the underside.
The leaf edges are toothed, with a doubly serrated appearance, and the leaves are oval in shape. In autumn the leaf color is golden-yellow to yellow-brown, and the leaves tend to drop quickly.
River Birch Hardiness Zone
River birch is hardy through zone 4 on the U.S. Department of Agriculture zone map. The USDA Hardiness Zone Map identifies how well plants will withstand cold winter temperatures. The map divides North America into 13 zones of 10 degrees each, ranging from -60 F to 70 F. For zone 4, the average minimum temperatures are between -30 F and -20 F, so the tree can be grown across nearly the entire U.S., with the exception of Alaska.
Sunday, November 3, 2019
The promotion of intangible products with event marketing Research Paper - 1
The promotion of intangible products with event marketing - Research Paper Example Consumers are seeking more intangible value, while the banking sector is looking for greater, more productive means to market its intangible products/services to customers. This pursuit leads the banking sector to the path of event marketing, which is a very valuable, needs-based method to satisfy customers' intangible needs and demands. Event marketing is derived from the observation of customer behavior through thorough data examination. These customer patterns may embody a time of need of a customer, which, once identified in a prompt way, tenders a vast prospect to provide intangible products/services to that customer (Harrison, 2000). An increasing number of banking organizations are already generating substantial returns from investing in event marketing activities. Numerous other financial organizations perform analytic-oriented targeting, also referred to as 'triggered marketing', and could even apply the same terms (Mayar & Uffenheimer, 2007). The capability to keep in touch or communicate with each customer promptly and relevantly entails a basis of significant information that is novel and is connected directly and routinely to service and sales channels (Mayar & Uffenheimer, 2007). This is the setting that motivates the biggest profits. The banking sector understands that its most valuable advantage is its customers. It is much more profitable to strengthen the bond with present customers and prevent defection than to attract new customers (Ennew & Waite, 2006). This essay will discuss the promotion of intangible products/services, such as those of the banking sector, through event marketing. Promoting Intangible Products through Event Marketing Intangible products, such as information, are a very extensive concept. Situated in the current terminology, a primary point of similarity in the marketing of tangibles and intangibles gravitates around the extent of intangibility innate in both forms (Gummesson, 2002). Marketing is focused on drawing the attention of and sustaining customers. The intangibility level of a product has its biggest impact on the goal of attracting customers. When it concerns keeping customers, intangible products come across quite specific setbacks (Kitchen & De Pelsmacker, 2004). However, these setbacks are minimized through event marketing. Event marketing is rooted in regularly and methodically monitoring full customer behavior and patterns to determine those times when there is a chance to improve a rapport or when a customer is most prepared to reach a decision to purchase an intangible product/service (Gummesson, 2002). The objective of event marketing is to facilitate communication in an appropriate and prompt way with customers and to develop services, marketing, and sales around their particular requirements. Event marketing normally makes use of the database and capably rakes through the customer folders to choose the customers with the recognized triggers (Mayar & Uffenheimer, 2007). Triggers, in marketing, are employed to routinely communicate suggestions, offers, relevant messages, or other
Thursday, October 31, 2019
Food laws Essay Example | Topics and Well Written Essays - 1000 words
Food laws - Essay Example Being a worker in the food service department, I have the obligation to ensure that the menus prepared suit the needs of the patients. This is because it would be an offense for the hospital to compel the patients to eat foods that are not in line with their dietary laws. In contemporary society, the aspect of diversity management is of great significance within the health sector. Diversity management seeks to satisfy the needs of different people irrespective of their differences. It requires that all citizens be treated with respect despite their differences in terms of religion, color, race, beliefs, or even physical abilities. Since our hospital pays attention to diversity management strategies, it is crucial to consider the diet preferences of our clients (Curtis, 2013). Secondly, since a hospital is a business just like any other, satisfying the customers is a matter of priority. Customers prefer to acquire services from businesses that satisfy their needs efficiently. On this note, it is crucial for our organization to design menus that satisfy the needs of the people and ensure that the Kosher and Halal laws are respected, especially during relevant holidays. Satisfying the requirements of these laws is a complex process due to the great variation in diet requirements. The food director is also concerned with the economic aspect of satisfying the needs of the patients. Often, introducing new meals other than the regular meals will add cost to the meals.
Tuesday, October 29, 2019
Case Study Example | Topics and Well Written Essays - 1250 words - 8
Case Study Example The criminal justice policies in the United States of America are guided by the 1967 President's Commission on Law Enforcement and Administration of Justice. One of the greatest and most groundbreaking achievements of this group was the publication of a report named The Challenge of Crime in a Free Society (Cole, Smith, & DeJong, 2014). The report has over 200 recommendations pertaining to US criminal justice policies. These recommendations were part of the commission's comprehensive approach toward preventing and fighting crime in the country. A few of the recommendations made by the commission found their place in the Omnibus Crime Control and Safe Streets Act of 1968. The President's Commission advocated that, with those recommendations, coordination between the enforcement force, the courts, and the correctional agencies would see great improvement. The Commission pointed out that criminal justice is the means through which both society and the individuals in the country are protected from crime. The crime committed was a robbery. The victim, Mr. Milton Brown, was robbed at gunpoint by two assailants named Bertha Bloutt and William Bloutt at Broadway and First Avenue on 11 October 2011. The assailants robbed the cash that Mr. Brown was taking from the daily deposits of his store, BJ Shoes, along with his wallet and car keys. Mr. Brown reported this crime to the Winston police department. It is mandatory that all criminal justice processes follow a sequential approach to reach justice. In this section of the essay the sequence of the criminal justice system will be discussed based upon the chosen case study involving the robbery of cash from Milton Brown, with Bertha Bloutt and William Bloutt as the accused. Mr. Brown reported the crime for which he became the victim to the Winston police department, which
Case Study Example | Topics and Well Written Essays - 1000 words - 30
Case Study Example Customers have since identified the company with high-quality products. The wide range of products ensured they capture a large market share. The company has sought to market its brand using several methods that have seen it grow profoundly. With only about two decades since its inception, the management team has tirelessly worked to see the business gain dominance in the industry. With over $200 million invested each year for the last five years, the company has entrenched itself quite well in the market. The management realized that spending considerably on sport sponsorship as well as on advertisements would establish its identity in the target market. This has since been done through organizing and sponsoring numerous sport events. In the process, printed jerseys with the company logo have caught the targeted customers' attention. Additionally, the company has used several media platforms to advertise and reach out to its target market. Electronic media have been mounting intensive campaigns that seek to promote the company's brand, and 'Protect This House' is among them. Under Armour understands that having clear and effective distribution channels is critical to market success. It is for this reason that it has availed its products in over twenty-five thousand retail outlets all over the globe. A large portion of Under Armour's products pass through wholesale before getting to the retailers who avail them to the consumers.
Between 2011 and 2013, a large share of sales, about seventy percent, came from wholesale sales made to large store retailers. Moreover, the company also engages in 'direct to consumer sales'. This involves enabling consumers to acquire the products directly from the stores within the factory. A sizeable percentage has been recorded through this channel, the highest being thirty percent. Worth noting is the e-commerce trade, in which customers can order products and shop online. The company has also
Sunday, October 27, 2019
Data Pre-processing Tool
Chapter 2
Real life data rarely comply with the requirements of various data mining tools. They are usually inconsistent and noisy, and may contain redundant attributes, unsuitable formats, etc. Hence data has to be prepared vigilantly before the data mining actually starts. It is a well-known fact that the success of a data mining algorithm is very much dependent on the quality of data processing. Data processing is one of the most important tasks in data mining, and in this context it is natural that data pre-processing is a complicated task involving large data sets. Sometimes data pre-processing takes more than 50% of the total time spent in solving the data mining problem. It is crucial for data miners to choose an efficient data pre-processing technique for a specific data set, one which can not only save processing time but also retain the quality of the data for the data mining process.
A data pre-processing tool should help miners with many data mining activities. For example, data may be provided in different formats as discussed in the previous chapter (flat files, database files, etc.). Data files may also have different formats of values, and tools may need to support calculation of derived attributes, data filters, joined data sets, etc. The data mining process generally starts with understanding of the data, and in this stage pre-processing tools may help with data exploration and data discovery tasks. Data processing includes lots of tedious work, and data pre-processing generally consists of:
- Data Cleaning
- Data Integration
- Data Transformation
- Data Reduction
In this chapter we will study all these data pre-processing activities.
2.1 Data Understanding
In the data understanding phase the first task is to collect initial data and then proceed with activities in order to become familiar with the data, to discover data quality problems, to gain first insights into the data, or to identify interesting subsets and form hypotheses about hidden information. According to the CRISP-DM model, the data understanding phase consists of the following tasks.
2.1.1 Collect Initial Data
The initial collection of data includes loading of data if required for data understanding. For instance, if a specific tool is applied for data understanding, it makes great sense to load your data into this tool. This attempt possibly leads to initial data preparation steps. However, if data is obtained from multiple data sources then integration is an additional issue.
2.1.2 Describe Data
Here the gross or surface properties of the gathered data are examined.
2.1.3 Explore Data
This task is required to handle the data mining questions, which may be addressed using querying, visualization and reporting. These include:
- Distribution of key attributes, for instance the goal attribute of a prediction task
- Relations between pairs or small numbers of attributes
- Results of simple aggregations
- Properties of important sub-populations
- Simple statistical analyses
2.1.4 Verify Data Quality
In this step the quality of the data is examined. It answers questions such as:
- Is the data complete (does it cover all the cases required)?
- Is it accurate, or does it contain errors, and if there are errors, how common are they?
- Are there missing values in the data? If so, how are they represented, where do they occur, and how common are they?
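To make the describe, explore, and verify tasks above concrete, here is a minimal sketch using the Python pandas library. The toy table, its column names, and its values are invented for illustration, not taken from any particular study.

import pandas as pd

# Toy customer table (all names and values are invented for illustration).
df = pd.DataFrame({
    "region": ["north", "south", "south", "north", "west"],
    "income": [52000, 48000, None, 61000, 45000],
    "age":    [34, 29, 41, None, 52],
})

# Describe data: surface properties such as types and summary statistics.
print(df.dtypes)
print(df.describe())

# Explore data: distribution of a key attribute and a simple aggregation.
print(df["region"].value_counts())
print(df.groupby("region")["income"].mean())

# Verify data quality: how common are missing values in each attribute?
print(df.isna().sum())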
2.2 Data Preprocessing
The data preprocessing phase focuses on the pre-processing steps that produce the data to be mined. Data preparation or preprocessing is one of the most important steps in data mining. Industrial practice indicates that once data is well prepared, the mined results are much more accurate, which means this step is also very critical for the success of a data mining method. Among others, data preparation mainly involves data cleaning, data integration, data transformation, and reduction.
2.2.1 Data Cleaning
Data cleaning is also known as data cleansing or scrubbing. It deals with detecting and removing inconsistencies and errors from data in order to get better quality data. When using a single data source such as flat files or databases, data quality problems arise due to misspellings during data entry, missing information or other invalid data. When the data is taken from the integration of multiple data sources such as data warehouses, federated database systems or global web-based information systems, the requirement for data cleaning increases significantly, because the multiple sources may contain redundant data in different formats. Consolidation of different data formats and elimination of redundant information become necessary in order to provide access to accurate and consistent data. Good quality data requires passing a set of quality criteria. Those criteria include:
- Accuracy: an aggregated value over the criteria of integrity, consistency and density.
- Integrity: an aggregated value over the criteria of completeness and validity.
- Completeness: achieved by correcting data containing anomalies.
- Validity: approximated by the amount of data satisfying integrity constraints.
- Consistency: concerns contradictions and syntactical anomalies in data.
- Uniformity: directly related to irregularities in data.
- Density: the quotient of missing values in the data and the total number of values that ought to be known.
- Uniqueness: related to the number of duplicates present in the data.
2.2.1.1 Terms Related to Data Cleaning
- Data cleaning: the process of detecting, diagnosing, and editing damaged data.
- Data editing: changing the value of data which are incorrect.
- Data flow: the passing of recorded information through succeeding information carriers.
- Inliers: data values falling inside the projected range.
- Outliers: data values falling outside the projected range.
- Robust estimation: evaluation of statistical parameters using methods that are less responsive to the effect of outliers than more conventional methods.
2.2.1.2 Definition: Data Cleaning
Data cleaning is a process used to identify imprecise, incomplete, or irrational data and then improve the quality through correction of detected errors and omissions. This process may include:
- format checks
- completeness checks
- reasonableness checks
- limit checks
- review of the data to identify outliers or other errors
- assessment of the data by subject area experts (e.g. taxonomic specialists)
By this process suspected records are flagged, documented and checked subsequently, and finally these suspected records can be corrected. Sometimes validation checks also involve checking for compliance against applicable standards, rules, and conventions. The general framework for data cleaning is given as:
1. Define and determine error types;
2. Search and identify error instances;
3. Correct the errors;
4. Document error instances and error types; and
5. Modify data entry procedures to reduce future errors.
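Several of these criteria can be measured directly on a data set. The sketch below computes rough density, completeness, and uniqueness figures with pandas, under the assumption that density is reported as the share of missing values; the toy table and its values are invented for illustration.

import pandas as pd

# Toy table with one missing value and one exact duplicate record.
df = pd.DataFrame({
    "pid":  [4791, 4792, 4793, 4793],
    "name": ["Ward", "Dixon", "Lee", "Lee"],
    "age":  [36, None, 29, 29],
})

# Density (as used here): quotient of missing values and all values known.
density_of_missing = df.isna().sum().sum() / df.size

# Completeness: share of records with no missing value in any attribute.
completeness = df.notna().all(axis=1).mean()

# Uniqueness: number of exact duplicate records.
duplicates = df.duplicated().sum()

print(density_of_missing, completeness, duplicates)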
The data cleaning process is referred to by different people by a number of terms, and it is a matter of preference which one uses. These terms include: error checking, error detection, data validation, data cleaning, data cleansing, data scrubbing and error correction. We use data cleaning to encompass three sub-processes, viz.
1. data checking and error detection;
2. data validation; and
3. error correction.
A fourth sub-process, improvement of the error prevention processes, could perhaps be added.
2.2.1.3 Problems with Data
Here we just note some key problems with data.
- Missing data: This problem occurs for two main reasons: data is absent from a source where it is expected to be present, or data is present but not available in an appropriate form. Detecting missing data is usually straightforward.
- Erroneous data: This problem occurs when a wrong value is recorded for a real world value. Detection of erroneous data can be quite difficult (for instance the incorrect spelling of a name).
- Duplicated data: This problem occurs for two reasons: repeated entry of the same real world entity with somewhat different values, or the same real world entity having different identifications. Repeated records are common and frequently easy to detect; different identifications of the same real world entity can be a very hard problem to identify and solve.
- Heterogeneities: When data from different sources are brought together in one analysis, heterogeneity may occur. Heterogeneity could be structural, which arises when the data structures reflect different business usage, or semantic, which arises when the meaning of the data is different in each system being combined. Heterogeneities are usually very difficult to resolve because they usually involve a lot of contextual data that is not well defined as metadata.
Information dependencies between the different sets of attributes are commonly present, and wrong cleaning mechanisms can further damage the information in the data. Various analysis tools handle these problems in different ways. Commercial offerings are available that assist the cleaning process, but these are often problem specific. Uncertainty in information systems is a well-recognized hard problem.
Extensive support for data cleaning must be provided by data warehouses. Data warehouses have a high probability of "dirty data" since they load and continuously refresh huge amounts of data from a variety of sources. Since these data warehouses are used for strategic decision making, the correctness of their data is important to avoid wrong decisions. In the ETL (Extraction, Transformation, and Loading) process for building a data warehouse, data transformations are related to schema or data translation and integration, and to filtering and aggregating the data to be stored in the warehouse. All data cleaning is classically performed in a separate data staging area prior to loading the transformed data into the warehouse. A large number of tools of varying functionality are available to support these tasks, but often a significant portion of the cleaning and transformation work has to be done manually or by low-level programs that are difficult to write and maintain.
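The problem types catalogued above can be flagged with simple automated checks before data ever reaches the staging area. A minimal pandas sketch follows, with invented records illustrating each defect; the column names and the plausible age range are assumptions for the example only.

import pandas as pd

# Invented records: one missing name, one impossible age, one near-duplicate.
df = pd.DataFrame({
    "pid":  [1, 2, 3, 4],
    "name": ["John Smith", "J. Smith", None, "Ann Berg"],
    "age":  [44, 44, 31, -7],
})

# Missing data: expected values that are absent.
print(df[df["name"].isna()])

# Erroneous data: values recorded outside the plausible range.
print(df[(df["age"] < 0) | (df["age"] > 120)])

# Duplicated data: the same entity entered under different keys. A crude
# blocking key (here, age) groups candidate duplicates for manual review;
# real de-duplication would fuzzily compare descriptive fields as well.
print(df[df.duplicated(subset=["age"], keep=False)])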
A data cleaning method should assure the following:
- It should identify and eliminate all major errors and inconsistencies both in individual data sources and when integrating multiple sources.
- Data cleaning should be supported by tools, to limit manual examination and programming effort, and it should be extensible so that it can cover additional sources.
- It should be performed in association with schema-related data transformations based on metadata.
- Data cleaning mapping functions should be specified in a declarative way and be reusable for other data sources.
2.2.1.4 Data Cleaning: Phases
1. Analysis: To identify errors and inconsistencies in the database there is a need for detailed analysis, which involves both manual inspection and automated analysis programs. This reveals where (most of) the problems are present.
2. Defining Transformation and Mapping Rules: After discovering the problems, this phase is concerned with defining the manner by which we are going to automate the solutions that clean the data. The problems found in the analysis phase translate to a list of activities, for example:
- Remove all entries for J. Smith because they are duplicates of John Smith.
- Find entries with 'bule' in the colour field and change these to 'blue'.
- Find all records where the phone number field does not match the pattern (NNNNN NNNNNN).
Further steps for cleaning this data are then applied, etc.
3. Verification: In this phase we check and assess the transformation plans made in phase 2. Without this step, we may end up making the data dirtier rather than cleaner. Since data transformation is the main step that actually changes the data itself, there is a need to be sure that the applied transformations will do it correctly. Therefore test and examine the transformation plans very carefully. Example: suppose we have a very thick C++ book where it says 'strict' in all the places where it should say 'struct'.
4. Transformation: Once it is certain that cleaning will be done correctly, apply the transformations verified in the last step. For large databases, this task is supported by a variety of tools.
Backflow of Cleaned Data: In data mining the main objective is to convert and move clean data into the target system. This creates a requirement to purify legacy data. Cleansing can be a complicated process depending on the technique chosen and has to be designed carefully to achieve the objective of removal of dirty data. Some methods to accomplish the task of data cleansing of a legacy system include:
- automated data cleansing
- manual data cleansing
- a combined cleansing process
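The rule-definition and verification phases can be prototyped directly. Below is a small sketch of the two example rules above in pandas; the table, its column names, and the phone format are illustrative assumptions only, not a prescribed implementation.

import pandas as pd

# Invented contact records exhibiting the two defects named in the rules.
df = pd.DataFrame({
    "name":   ["John Smith", "Mary Hill"],
    "colour": ["bule", "green"],
    "phone":  ["01234 567890", "1234-567"],
})

# Rule: find entries with 'bule' in the colour field and change them to 'blue'.
df["colour"] = df["colour"].str.replace("bule", "blue", regex=False)

# Rule: flag records whose phone number does not match (NNNNN NNNNNN).
df["phone_ok"] = df["phone"].str.match(r"^\d{5} \d{6}$")

# Verification: inspect the failures before transforming further, so the
# cleaning step does not make the data dirtier rather than cleaner.
print(df[~df["phone_ok"]])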
2.2.1.5 Missing Values
Data cleaning addresses a variety of data quality problems, including noise and outliers, inconsistent data, duplicate data, and missing values. Missing values are one important problem to be addressed. The missing value problem occurs because many tuples may have no recorded value for several attributes. For example, consider a customer sales database consisting of a whole bunch of records (let's say around 100,000) where some of the records have certain fields missing; say, customer income in the sales data may be missing. The goal here is to find a way to predict what the missing data values should be (so that these can be filled in) based on the existing data. Missing data may be due to the following reasons:
- equipment malfunction
- data inconsistent with other recorded data, and thus deleted
- data not entered due to misunderstanding
- certain data not being considered important at the time of entry
- failure to register history or changes of the data
How to Handle Missing Values?
Dealing with missing values is a regular question that has to do with the actual meaning of the data. There are various methods for handling missing entries:
1. Ignore the data row. One solution is to just ignore the entire data row. This is generally done when the class label is missing (assuming that the data mining goal is classification), or when many attributes are missing from the row (not just one). But if the percentage of such rows is high we will definitely get poor performance.
2. Use a global constant to fill in for missing values. We can fill in a global constant such as "unknown", "N/A" or minus infinity. This is done because at times it just doesn't make sense to try to predict the missing value. For example, if the office address is missing for some customers in the sales database, filling it in doesn't make much sense. This method is simple but is not foolproof.
3. Use the attribute mean. Say the average income of a family is X; you can use that value to replace missing income values in the customer sales database.
4. Use the attribute mean for all samples belonging to the same class. Say you have a car pricing database that, among other things, classifies cars into Luxury and Low budget, and you are dealing with missing values in the cost field. Replacing the missing cost of a luxury car with the average cost of all luxury cars is probably more accurate than the value you would get by also factoring in the low budget cars.
5. Use a data mining algorithm to predict the value. The value can be determined using regression, inference-based tools using the Bayesian formalism, decision trees, clustering algorithms, etc.
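A hedged sketch of strategies 1-4 in pandas follows; the car-pricing table, its category labels, and the sentinel constant are invented for illustration. Strategy 5 would replace the fill step with a model fitted on the complete rows.

import pandas as pd

# Invented car-pricing table: 'category' plays the role of the class label.
df = pd.DataFrame({
    "category": ["luxury", "luxury", "budget", "budget", "luxury"],
    "cost":     [90000, None, 12000, 14000, 84000],
})

# 1. Ignore the data row: drop records that contain missing values.
dropped = df.dropna()

# 2. Global constant: fill every missing cost with a sentinel value.
filled_const = df["cost"].fillna(-1)

# 3. Attribute mean: fill with the mean cost over all cars.
filled_mean = df["cost"].fillna(df["cost"].mean())

# 4. Class mean: fill a luxury car's missing cost with the luxury-car mean,
#    which is usually more accurate than the overall mean.
filled_class = df.groupby("category")["cost"].transform(
    lambda s: s.fillna(s.mean())
)

print(filled_const.tolist(), filled_mean.tolist(), filled_class.tolist(), sep="\n")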
2.2.1.6 Noisy Data
Noise can be defined as a random error or variance in a measured variable. Due to this randomness it is very difficult to follow a strategy for removing noise from the data. Real world data is not always faultless: it can suffer from corruption which may impact the interpretations of the data, the models created from the data, and the decisions made based on the data. Incorrect attribute values could be present because of the following reasons:
- faulty data collection instruments
- data entry problems
- duplicate records
- incomplete data
- inconsistent data
- incorrect processing
- data transmission problems
- technology limitations
- inconsistency in naming conventions
- outliers
How to Handle Noisy Data?
The methods for removing noise from data are as follows.
1. Binning: this approach first sorts the data and partitions it into (equal-frequency) bins; one can then smooth by bin means, bin medians, bin boundaries, etc.
2. Regression: in this method smoothing is done by fitting the data to regression functions.
3. Clustering: clustering detects and removes outliers from the data.
4. Combined computer and human inspection: in this approach the computer detects suspicious values which are then checked by human experts (e.g., this approach can deal with possible outliers).
These methods are explained in detail as follows.
Binning: a data preparation activity that converts continuous data to discrete data by replacing a value from a continuous range with a bin identifier, where each bin represents a range of values. For instance, age can be changed to bins such as 20 or under, 21-40, 41-65 and over 65. Binning methods smooth a sorted data set by consulting the values around each value; this is therefore called local smoothing. Let us consider a binning example.
Binning methods:
- Equal-width (distance) partitioning divides the range into N intervals of equal size (a uniform grid): if A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B-A)/N. This is the most straightforward approach, but outliers may dominate the presentation and skewed data is not handled well.
- Equal-depth (frequency) partitioning divides the range (values of a given attribute) into N intervals, each containing approximately the same number of samples (elements). This gives good data scaling, though managing categorical attributes can be tricky.
- Smoothing by bin means: each bin value is replaced by the mean of the values in the bin.
- Smoothing by bin medians: each bin value is replaced by the median of the values in the bin.
- Smoothing by bin boundaries: each bin value is replaced by the closest boundary value.
Example. Let the sorted data for price (in dollars) be: 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34.
Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
Smoothing by bin means (for example, the mean of 4, 8, 9, 15 is 9):
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
Regression: regression is a data mining technique used to fit an equation to a dataset. The simplest form of regression is linear regression, which uses the formula of a straight line (y = b + wx) and determines the suitable values for b and w to predict the value of y based upon a given value of x. Sophisticated techniques, such as multiple regression, permit the use of more than one input variable and allow for the fitting of more complex models, such as a quadratic equation. Regression is further described in a subsequent chapter while discussing prediction.
Clustering: clustering is a method of grouping data into different groups, so that the data in each group share similar trends and patterns. Clustering constitutes a major class of data mining algorithms. These algorithms automatically partition the data space into a set of regions or clusters. The goal of the process is to find all sets of similar examples in the data, in some optimal fashion. In a typical clustering of such data into, say, three clusters, the values that fall outside every cluster are the outliers.
Combined computer and human inspection: these methods find the suspicious values using computer programs, and the values are then verified by human experts. By this process all outliers are checked.
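The worked price example can be reproduced in a few lines of numpy. This is a minimal sketch that assumes the data is already sorted and splits it into three equal-depth bins of four values each.

import numpy as np

# Sorted prices from the worked example above.
prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

# Equal-depth (frequency) partitioning: 3 bins of 4 values each.
bins = prices.reshape(3, 4)

# Smoothing by bin means: every value becomes the (rounded) mean of its bin.
means = np.rint(bins.mean(axis=1)).astype(int)
print(np.repeat(means, 4).reshape(3, 4))   # rows of 9s, 23s, 29s

# Smoothing by bin boundaries: every value moves to the nearer bin boundary.
lo, hi = bins[:, :1], bins[:, -1:]
print(np.where(bins - lo <= hi - bins, lo, hi))
# [[ 4  4  4 15]
#  [21 21 25 25]
#  [26 26 26 34]]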
2.2.1.7 Data Cleaning as a Process
Data cleaning is the process of detecting, diagnosing, and editing data: a three-stage method involving a repeated cycle of screening, diagnosing, and editing of suspected data abnormalities. Many data errors are detected incidentally during study activities; however, it is more efficient to discover inconsistencies by actively searching for them in a planned manner. It is not always immediately clear whether a data point is erroneous; many times it requires careful examination. Likewise, missing values require additional checks. Therefore, predefined rules for dealing with errors and with true missing and extreme values are part of good practice. One can monitor for suspect features in survey questionnaires, databases, or analysis data. In small studies, with the examiner intimately involved at all stages, there may be little or no difference between a database and an analysis dataset.
During as well as after treatment, the diagnostic and treatment phases of cleaning need insight into the sources and types of errors at all stages of the study. The data flow concept is therefore crucial in this respect. After measurement, the research data go through repeated steps of being entered into information carriers, extracted and transferred to other carriers, edited, selected, transformed, summarized, and presented. It is essential to understand that errors can occur at any stage of the data flow, including during data cleaning itself. Most of these problems are due to human error. Inaccuracy of a single data point and measurement may be tolerable, and related to the inherent technological error of the measurement device. Therefore the process of data cleaning must focus on those errors that are beyond small technical variations and that form a major shift within or beyond the population distribution. In turn, it must be based on understanding of technical errors and expected ranges of normal values. Some errors are worthy of higher priority, but which ones are most significant is highly study-specific. For instance, in most medical epidemiological studies, errors that need to be cleaned, at all costs, include missing gender, gender misspecification, birth date or examination date errors, duplications or merging of records, and biologically impossible results. Another example: in nutrition studies, date errors lead to age errors, which in turn lead to errors in weight-for-age scoring and, further, to misclassification of subjects as under- or overweight. Errors of sex and date are particularly important because they contaminate derived variables. Prioritization is essential if the study is under time pressure or if resources for data cleaning are limited.
2.2.2 Data Integration
This is the process of taking data from one or more sources and mapping it, field by field, onto a new data structure. The idea is to combine data from multiple sources into a coherent form. Various data mining projects require data from multiple sources because:
- data may be distributed over different databases or data warehouses (for example an epidemiological study that needs information about both hospital admissions and car accidents);
- data may be required from different geographic distributions, or there may be a need for historical data (e.g. integrating historical data into a new data warehouse);
- there may be a need to enhance the data with additional (external) data, to improve data mining precision.
2.2.2.1 Data Integration Issues
There are a number of issues in data integration. Imagine two database tables, Table 1 and Table 2, describing the same persons, keyed by a person identifier PID, with attributes such as Name or Given Name, DOB, and Age. In integrating these two tables a variety of issues is involved, such as:
1. The same attribute may have different names (for example, Name and Given Name are the same attribute with different names).
2. An attribute may be derived from another (for example, the attribute Age is derived from the attribute DOB).
3. Attributes might be redundant (for example, a repeated attribute PID is redundant).
4. Values in attributes might be different (for example, for PID 4791 the values in the second and third fields differ between the two tables).
5. Duplicate records may appear under different keys (there is a possibility of replication of the same record with different key values).
Therefore schema integration and object matching can be tricky. The question here is: how are equivalent entities from different sources matched? This problem is known as the entity identification problem. Conflicts have to be detected and resolved, and integration becomes easier if unique entity keys are available in all the data sets (or tables) to be linked. Metadata can help in schema integration (examples of metadata for each attribute include the name, meaning, data type and range of values permitted for the attribute).
2.2.2.2 Redundancy
Redundancy is another important issue in data integration. Two given attributes (such as DOB and Age, for instance, in the tables above) may be redundant if one is derived from the other attribute or set of attributes. Inconsistencies in attribute or dimension naming can also lead to redundancies in the given data sets.
Handling Redundant Data
We can handle data redundancy problems in the following ways:
- Use correlation analysis.
- Consider different codings / representations (e.g. metric vs. imperial measures).
- Careful (manual) integration of the data can reduce or prevent redundancies (and inconsistencies).
- De-duplication (also called internal data linkage): if no unique entity keys are available, analyze the values in attributes to find duplicates.
- Process redundant and inconsistent data (easy if the values are the same): delete one of the values, average the values (only for numerical attributes), or take the majority value (if there are more than two duplicates and some values are the same).
Correlation analysis is explained in detail here. Correlation analysis (also called Pearson's product moment coefficient): some redundancies can be detected by using correlation analysis. Given two attributes, such analysis can measure how strongly one attribute implies the other. For numerical attributes we can compute the correlation coefficient of two attributes A and B to evaluate the correlation between them. This is given by

r_{A,B} = \frac{\sum (a_i b_i) - n \bar{A} \bar{B}}{n \sigma_A \sigma_B}

where
- n is the number of tuples,
- \bar{A} and \bar{B} are the respective means of A and B,
- \sigma_A and \sigma_B are the respective standard deviations of A and B, and
- \sum (a_i b_i) is the sum of the AB cross-product.
a. If r_{A,B} is greater than zero, then A and B are positively correlated, meaning that the values of one attribute increase as the values of the other increase (the coefficient always lies between -1 and +1).
b. If r_{A,B} is equal to zero, it indicates that A and B are independent of each other and there is no correlation between them.
c. If r_{A,B} is less than zero, then A and B are negatively correlated: as the value of one attribute increases, the value of the other decreases. This means that each attribute discourages the other.
It is important to note that correlation does not imply causality. That is, if A and B are correlated, this does not essentially mean that A causes B or that B causes A. For example, in analyzing a demographic database, we may find that attributes representing the number of accidents and the number of car thefts in a region are correlated. This does not mean that one causes the other; both may be related to a third attribute, namely population.
For discrete data, a correlation relationship between two attributes can be discovered by a \chi^2 (chi-square) test. Let A have c distinct values a_1, a_2, ..., a_c and let B have r distinct values b_1, b_2, ..., b_r. The data tuples described by A and B can be shown as a contingency table, with the c values of A making up the columns and the r values of B making up the rows. Each possible joint event (A_i, B_j) has its own cell in the table. The statistic is computed as

\chi^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}
where
- O_{i,j} is the observed frequency (i.e. the actual count) of the joint event (A_i, B_j), and
- E_{i,j} is the expected frequency, which can be computed as

E_{i,j} = \frac{\left( \sum_{k=1}^{c} O_{i,k} \right) \left( \sum_{k=1}^{r} O_{k,j} \right)}{N}

where
- N is the number of data tuples,
- \sum_{k=1}^{c} O_{i,k} is the total count of row i (the number of tuples having that row's value for B), and
- \sum_{k=1}^{r} O_{k,j} is the total count of column j (the number of tuples having that column's value for A).
The larger the \chi^2 value, the more likely the variables are related. The cells that contribute the most to the \chi^2 value are those whose actual count is very different from the expected count.
Chi-Square Calculation: An Example
Suppose a group of 1,500 people was surveyed. The gender of each person was noted, and each person was polled as to whether his or her preferred type of reading material was fiction or non-fiction. The observed frequency of each possible joint event is summarized in the following contingency table (the numbers in parentheses are the expected frequencies). Calculate chi-square.

              male        female       Sum (row)
fiction       250 (90)    200 (360)    450
non-fiction   50 (210)    1000 (840)   1050
Sum (col.)    300         1200         1500

For example, E_{11} = count(male) * count(fiction) / N = 300 * 450 / 1500 = 90, and so on. For this table the degrees of freedom are (2-1)(2-1) = 1, as the table is 2x2. For 1 degree of freedom, the \chi^2 value needed to reject the hypothesis at the 0.001 significance level is 10.828 (taken from the table of upper percentage points of the \chi^2 distribution, typically available in any statistics textbook). Here the computed value is

\chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} \approx 507.9

Since the computed value is far above the threshold, we can reject the hypothesis that gender and preferred reading are independent and conclude that the two attributes are strongly correlated for the given group.
Duplication must also be detected at the tuple level. The use of denormalized tables is another source of redundancies. Redundancies may further lead to data inconsistencies (due to updating some copies but not others).
2.2.2.3 Detection and Resolution of Data Value Conflicts
Another significant issue in data integration is the discovery and resolution of data value conflicts. For example, for the same entity, attribute values from different sources may differ: weight can be stored in metric units in one source and British imperial units in another source. For instance, for a hotel cha
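As a cross-check of the worked chi-square example above, the whole computation fits in a few lines of numpy; this is a sketch only, and any contingency table of observed counts could be substituted.

import numpy as np

# Observed counts from the example: rows = fiction / non-fiction,
# columns = male / female.
obs = np.array([[250.0, 200.0],
                [50.0, 1000.0]])

# Expected frequencies: E_ij = (row total * column total) / N.
expected = obs.sum(axis=1, keepdims=True) @ obs.sum(axis=0, keepdims=True) / obs.sum()
print(expected)        # [[ 90. 360.] [210. 840.]]

# Chi-square statistic: sum over cells of (O - E)^2 / E.
chi2 = ((obs - expected) ** 2 / expected).sum()
print(round(chi2, 2))  # about 507.9, far above the 10.828 cutoff at the 0.001 level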
In this stage pre-processing tools may help with data exploration and data discovery tasks. Data processing includes lots of tedious works, Data pre-processing generally consists of Data Cleaning Data Integration Data Transformation And Data Reduction. In this chapter we will study all these data pre-processing activities. 2.1 Data Understanding In Data understanding phase the first task is to collect initial data and then proceed with activities in order to get well known with data, to discover data quality problems, to discover first insight into the data or to identify interesting subset to form hypothesis for hidden information. The data understanding phase according to CRISP model can be shown in following . 2.1.1 Collect Initial Data The initial collection of data includes loading of data if required for data understanding. For instance, if specific tool is applied for data understanding, it makes great sense to load your data into this tool. This attempt possibly leads to initial data preparation steps. However if data is obtained from multiple data sources then integration is an additional issue. 2.1.2 Describe data Here the gross or surface properties of the gathered data are examined. 2.1.3 Explore data This task is required to handle the data mining questions, which may be addressed using querying, visualization and reporting. These include: Sharing of key attributes, for instance the goal attribute of a prediction task Relations between pairs or small numbers of attributes Results of simple aggregations Properties of important sub-populations Simple statistical analyses. 2.1.4 Verify data quality In this step quality of data is examined. It answers questions such as: Is the data complete (does it cover all the cases required)? Is it accurate or does it contains errors and if there are errors how common are they? Are there missing values in the data? If so how are they represented, where do they occur and how common are they? 2.2 Data Preprocessing Data preprocessing phase focus on the pre-processing steps that produce the data to be mined. Data preparation or preprocessing is one most important step in data mining. Industrial practice indicates that one data is well prepared; the mined results are much more accurate. This means this step is also a very critical fro success of data mining method. Among others, data preparation mainly involves data cleaning, data integration, data transformation, and reduction. 2.2.1 Data Cleaning Data cleaning is also known as data cleansing or scrubbing. It deals with detecting and removing inconsistencies and errors from data in order to get better quality data. While using a single data source such as flat files or databases data quality problems arises due to misspellings while data entry, missing information or other invalid data. While the data is taken from the integration of multiple data sources such as data warehouses, federated database systems or global web-based information systems, the requirement for data cleaning increases significantly. This is because the multiple sources may contain redundant data in different formats. Consolidation of different data formats abs elimination of redundant information becomes necessary in order to provide access to accurate and consistent data. Good quality data requires passing a set of quality criteria. Those criteria include: Accuracy: Accuracy is an aggregated value over the criteria of integrity, consistency and density. 
Integrity: Integrity is an aggregated value over the criteria of completeness and validity. Completeness: completeness is achieved by correcting data containing anomalies. Validity: Validity is approximated by the amount of data satisfying integrity constraints. Consistency: consistency concerns contradictions and syntactical anomalies in data. Uniformity: it is directly related to irregularities in data. Density: The density is the quotient of missing values in the data and the number of total values ought to be known. Uniqueness: uniqueness is related to the number of duplicates present in the data. 2.2.1.1 Terms Related to Data Cleaning Data cleaning: data cleaning is the process of detecting, diagnosing, and editing damaged data. Data editing: data editing means changing the value of data which are incorrect. Data flow: data flow is defined as passing of recorded information through succeeding information carriers. Inliers: Inliers are data values falling inside the projected range. Outlier: outliers are data value falling outside the projected range. Robust estimation: evaluation of statistical parameters, using methods that are less responsive to the effect of outliers than more conventional methods are called robust method. 2.2.1.2 Definition: Data Cleaning Data cleaning is a process used to identify imprecise, incomplete, or irrational data and then improving the quality through correction of detected errors and omissions. This process may include format checks Completeness checks Reasonableness checks Limit checks Review of the data to identify outliers or other errors Assessment of data by subject area experts (e.g. taxonomic specialists). By this process suspected records are flagged, documented and checked subsequently. And finally these suspected records can be corrected. Sometimes validation checks also involve checking for compliance against applicable standards, rules, and conventions. The general framework for data cleaning given as: Define and determine error types; Search and identify error instances; Correct the errors; Document error instances and error types; and Modify data entry procedures to reduce future errors. Data cleaning process is referred by different people by a number of terms. It is a matter of preference what one uses. These terms include: Error Checking, Error Detection, Data Validation, Data Cleaning, Data Cleansing, Data Scrubbing and Error Correction. We use Data Cleaning to encompass three sub-processes, viz. Data checking and error detection; Data validation; and Error correction. A fourth improvement of the error prevention processes could perhaps be added. 2.2.1.3 Problems with Data Here we just note some key problems with data Missing data : This problem occur because of two main reasons Data are absent in source where it is expected to be present. Some times data is present are not available in appropriately form Detecting missing data is usually straightforward and simpler. Erroneous data: This problem occurs when a wrong value is recorded for a real world value. Detection of erroneous data can be quite difficult. (For instance the incorrect spelling of a name) Duplicated data : This problem occur because of two reasons Repeated entry of same real world entity with some different values Some times a real world entity may have different identifications. Repeat records are regular and frequently easy to detect. The different identification of the same real world entities can be a very hard problem to identify and solve. 
Heterogeneities: When data from different sources are brought together in one analysis problem heterogeneity may occur. Heterogeneity could be Structural heterogeneity arises when the data structures reflect different business usage Semantic heterogeneity arises when the meaning of data is different n each system that is being combined Heterogeneities are usually very difficult to resolve since because they usually involve a lot of contextual data that is not well defined as metadata. Information dependencies in the relationship between the different sets of attribute are commonly present. Wrong cleaning mechanisms can further damage the information in the data. Various analysis tools handle these problems in different ways. Commercial offerings are available that assist the cleaning process, but these are often problem specific. Uncertainty in information systems is a well-recognized hard problem. In following a very simple examples of missing and erroneous data is shown Extensive support for data cleaning must be provided by data warehouses. Data warehouses have high probability of ââ¬Å"dirty dataâ⬠since they load and continuously refresh huge amounts of data from a variety of sources. Since these data warehouses are used for strategic decision making therefore the correctness of their data is important to avoid wrong decisions. The ETL (Extraction, Transformation, and Loading) process for building a data warehouse is illustrated in following . Data transformations are related with schema or data translation and integration, and with filtering and aggregating data to be stored in the data warehouse. All data cleaning is classically performed in a separate data performance area prior to loading the transformed data into the warehouse. A large number of tools of varying functionality are available to support these tasks, but often a significant portion of the cleaning and transformation work has to be done manually or by low-level programs that are difficult to write and maintain. A data cleaning method should assure following: It should identify and eliminate all major errors and inconsistencies in an individual data sources and also when integrating multiple sources. Data cleaning should be supported by tools to bound manual examination and programming effort and it should be extensible so that can cover additional sources. It should be performed in association with schema related data transformations based on metadata. Data cleaning mapping functions should be specified in a declarative way and be reusable for other data sources. 2.2.1.4 Data Cleaning: Phases 1. Analysis: To identify errors and inconsistencies in the database there is a need of detailed analysis, which involves both manual inspection and automated analysis programs. This reveals where (most of) the problems are present. 2. Defining Transformation and Mapping Rules: After discovering the problems, this phase are related with defining the manner by which we are going to automate the solutions to clean the data. We will find various problems that translate to a list of activities as a result of analysis phase. Example: Remove all entries for J. Smith because they are duplicates of John Smith Find entries with `bule in colour field and change these to `blue. Find all records where the Phone number field does not match the pattern (NNNNN NNNNNN). Further steps for cleaning this data are then applied. Etc â⬠¦ 3. Verification: In this phase we check and assess the transformation plans made in phase- 2. 
Without this step, we may end up making the data dirtier rather than cleaner. Since data transformation is the main step that actually changes the data itself so there is a need to be sure that the applied transformations will do it correctly. Therefore test and examine the transformation plans very carefully. Example: Let we have a very thick C++ book where it says strict in all the places where it should say struct 4. Transformation: Now if it is sure that cleaning will be done correctly, then apply the transformation verified in last step. For large database, this task is supported by a variety of tools Backflow of Cleaned Data: In a data mining the main objective is to convert and move clean data into target system. This asks for a requirement to purify legacy data. Cleansing can be a complicated process depending on the technique chosen and has to be designed carefully to achieve the objective of removal of dirty data. Some methods to accomplish the task of data cleansing of legacy system include: n Automated data cleansing n Manual data cleansing n The combined cleansing process 2.2.1.5 Missing Values Data cleaning addresses a variety of data quality problems, including noise and outliers, inconsistent data, duplicate data, and missing values. Missing values is one important problem to be addressed. Missing value problem occurs because many tuples may have no record for several attributes. For Example there is a customer sales database consisting of a whole bunch of records (lets say around 100,000) where some of the records have certain fields missing. Lets say customer income in sales data may be missing. Goal here is to find a way to predict what the missing data values should be (so that these can be filled) based on the existing data. Missing data may be due to following reasons Equipment malfunction Inconsistent with other recorded data and thus deleted Data not entered due to misunderstanding Certain data may not be considered important at the time of entry Not register history or changes of the data How to Handle Missing Values? Dealing with missing values is a regular question that has to do with the actual meaning of the data. There are various methods for handling missing entries 1. Ignore the data row. One solution of missing values is to just ignore the entire data row. This is generally done when the class label is not there (here we are assuming that the data mining goal is classification), or many attributes are missing from the row (not just one). But if the percentage of such rows is high we will definitely get a poor performance. 2. Use a global constant to fill in for missing values. We can fill in a global constant for missing values such as unknown, N/A or minus infinity. This is done because at times is just doesnt make sense to try and predict the missing value. For example if in customer sales database if, say, office address is missing for some, filling it in doesnt make much sense. This method is simple but is not full proof. 3. Use attribute mean. Let say if the average income of a a family is X you can use that value to replace missing income values in the customer sales database. 4. Use attribute mean for all samples belonging to the same class. Lets say you have a cars pricing DB that, among other things, classifies cars to Luxury and Low budget and youre dealing with missing values in the cost field. 
Replacing missing cost of a luxury car with the average cost of all luxury cars is probably more accurate then the value youd get if you factor in the low budget 5. Use data mining algorithm to predict the value. The value can be determined using regression, inference based tools using Bayesian formalism, decision trees, clustering algorithms etc. 2.2.1.6 Noisy Data Noise can be defined as a random error or variance in a measured variable. Due to randomness it is very difficult to follow a strategy for noise removal from the data. Real world data is not always faultless. It can suffer from corruption which may impact the interpretations of the data, models created from the data, and decisions made based on the data. Incorrect attribute values could be present because of following reasons Faulty data collection instruments Data entry problems Duplicate records Incomplete data: Inconsistent data Incorrect processing Data transmission problems Technology limitation. Inconsistency in naming convention Outliers How to handle Noisy Data? The methods for removing noise from data are as follows. 1. Binning: this approach first sort data and partition it into (equal-frequency) bins then one can smooth it using- Bin means, smooth using bin median, smooth using bin boundaries, etc. 2. Regression: in this method smoothing is done by fitting the data into regression functions. 3. Clustering: clustering detect and remove outliers from the data. 4. Combined computer and human inspection: in this approach computer detects suspicious values which are then checked by human experts (e.g., this approach deal with possible outliers).. Following methods are explained in detail as follows: Binning: Data preparation activity that converts continuous data to discrete data by replacing a value from a continuous range with a bin identifier, where each bin represents a range of values. For instance, age can be changed to bins such as 20 or under, 21-40, 41-65 and over 65. Binning methods smooth a sorted data set by consulting values around it. This is therefore called local smoothing. Let consider a binning example Binning Methods n Equal-width (distance) partitioning Divides the range into N intervals of equal size: uniform grid if A and B are the lowest and highest values of the attribute, the width of intervals will be: W = (B-A)/N. The most straightforward, but outliers may dominate presentation Skewed data is not handled well n Equal-depth (frequency) partitioning 1. It divides the range (values of a given attribute) into N intervals, each containing approximately same number of samples (elements) 2. Good data scaling 3. Managing categorical attributes can be tricky. n Smooth by bin means- Each bin value is replaced by the mean of values n Smooth by bin medians- Each bin value is replaced by the median of values n Smooth by bin boundaries Each bin value is replaced by the closest boundary value Example Let Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34 n Partition into equal-frequency (equi-depth) bins: o Bin 1: 4, 8, 9, 15 o Bin 2: 21, 21, 24, 25 o Bin 3: 26, 28, 29, 34 n Smoothing by bin means: o Bin 1: 9, 9, 9, 9 ( for example mean of 4, 8, 9, 15 is 9) o Bin 2: 23, 23, 23, 23 o Bin 3: 29, 29, 29, 29 n Smoothing by bin boundaries: o Bin 1: 4, 4, 4, 15 o Bin 2: 21, 21, 25, 25 o Bin 3: 26, 26, 26, 34 Regression: Regression is a DM technique used to fit an equation to a dataset. 
Regression: regression is a data mining technique used to fit an equation to a data set. The simplest form is linear regression, which uses the formula of a straight line, y = b + wx, and determines the suitable values of b and w so that y can be predicted from a given value of x (a minimal fitting sketch appears at the end of this subsection). More sophisticated techniques, such as multiple regression, permit the use of more than one input variable and allow more complex models, such as a quadratic equation, to be fitted. Regression is described further in a subsequent chapter in the discussion of prediction.

Clustering: clustering is a method of grouping data into different groups so that the data in each group share similar trends and patterns. Clustering algorithms constitute a major class of data mining algorithms. These algorithms automatically partition the data space into a set of regions, or clusters; the goal of the process is to find all sets of similar examples in the data, in some optimal fashion. Values that fall outside the clusters are outliers.

Combined computer and human inspection: these methods find suspicious values using computer programs, after which the values are verified by human experts. In this way all outliers are checked.
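As an illustration of the straight-line fit y = b + wx mentioned above, here is a minimal ordinary least squares sketch in plain Python; the data points are hypothetical.

    # Fit y = b + w*x by ordinary least squares.
    def fit_line(xs, ys):
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        # w = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
        w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
        b = mean_y - w * mean_x  # the fitted line passes through the mean point
        return b, w

    xs = [1, 2, 3, 4, 5]
    ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # noisy measurements of roughly y = 2x
    b, w = fit_line(xs, ys)
    print(f"y = {b:.2f} + {w:.2f}x")  # prints: y = 0.05 + 1.99x

Smoothing then consists of replacing each measured y by the fitted value b + w*x, which removes the random variation around the line.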
2.2.1.7 Data Cleaning as a Process
Data cleaning is a process of detecting, diagnosing, and editing data: a three-stage method involving repeated cycles of screening, diagnosing, and editing of suspected data abnormalities. Many data errors are detected incidentally during study activities; however, it is more efficient to discover inconsistencies by actively searching for them in a planned manner. It is not always immediately clear whether a data point is erroneous, and many cases require careful examination; likewise, missing values require additional checks. Predefined rules for dealing with errors, true missing values, and extreme values are therefore part of good practice. One can monitor for suspect features in survey questionnaires, databases, or analysis data sets. In small studies, with the examiner intimately involved at all stages, there may be little or no difference between a database and an analysis data set.

During as well as after treatment, the diagnostic and treatment phases of cleaning need insight into the sources and types of errors at all stages of the study, so the concept of data flow is crucial. After measurement, research data go through repeated steps: they are entered into information carriers, extracted, transferred to other carriers, edited, selected, transformed, summarized, and presented. It is essential to understand that errors can occur at any stage of this data flow, including during data cleaning itself, and most of these problems are due to human error. Inaccuracy of a single data point or measurement may be tolerable when it is comparable to the inherent technical error of the measurement device. The process of data cleaning must therefore focus on those errors that are beyond small technical variations and that form a major shift within or beyond the population distribution. In turn, it must be based on an understanding of technical errors and the expected ranges of normal values.

Some errors deserve higher priority, but which ones are most significant is highly study-specific. For instance, in most medical epidemiological studies, errors that need to be cleaned at all costs include missing gender, gender misspecification, birth date or examination date errors, duplication or merging of records, and biologically impossible results. Another example is nutrition studies, where date errors lead to age errors, which in turn lead to errors in weight-for-age scoring and, further, to misclassification of subjects as under- or overweight. Errors of sex and date are particularly important because they contaminate derived variables. Prioritization is essential if the study is under time pressure or if resources for data cleaning are limited.

2.2.2 Data Integration
Data integration is the process of taking data from one or more sources and mapping it, field by field, onto a new data structure. The idea is to combine data from multiple sources into a coherent form. Many data mining projects require data from multiple sources because:
- Data may be distributed over different databases or data warehouses (for example, an epidemiological study that needs information about both hospital admissions and car accidents)
- Data may be required from different geographic distributions, or historical data may be needed (e.g., integrating historical data into a new data warehouse)
- The data may need to be enhanced with additional, external data (to improve data mining precision)

2.2.2.1 Data Integration Issues
There are a number of issues in data integration. Imagine two database tables, Database Table-1 and Database Table-2 (the tables themselves are not reproduced here). Integrating two such tables involves a variety of issues, illustrated in the sketch at the end of this subsection:
1. The same attribute may have different names (for example, Name and Given Name are the same attribute with different names).
2. An attribute may be derived from another (for example, the attribute Age is derived from the attribute DOB).
3. Attributes might be redundant (for example, the attribute PID is redundant).
4. Values in attributes might differ (for example, for PID 4791 the values in the second and third fields differ between the two tables).
5. Records may be duplicated under different keys (the same record may be replicated with different key values).

Schema integration and object matching can therefore be tricky. The central question is how equivalent entities from different sources are to be matched; this is known as the entity identification problem. Conflicts have to be detected and resolved, and integration becomes easier if unique entity keys are available in all the data sets (or tables) to be linked. Metadata can help in schema integration (metadata for an attribute typically includes its name, meaning, data type, and the range of values permitted for the attribute).
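The sketch below illustrates issues 1, 2, and 4 in Python with pandas. The two tables, the PID values, and the simple DOB-to-Age rule are hypothetical stand-ins for the database tables discussed above.

    import pandas as pd

    # Hypothetical versions of the two tables discussed above.
    t1 = pd.DataFrame({
        "PID":  [4790, 4791],
        "Name": ["A. Smith", "B. Jones"],
        "DOB":  ["1980-05-01", "1975-11-23"],
    })
    t2 = pd.DataFrame({
        "PID":        [4790, 4791],
        "Given Name": ["A. Smith", "B. Johnes"],  # conflicting value for 4791
        "Age":        [44, 49],
    })

    # Issue 1: same attribute, different names -- unify the schema.
    t2 = t2.rename(columns={"Given Name": "Name"})

    # Issue 2: Age can be derived from DOB (naive year subtraction here,
    # ignoring month and day), so it need not be stored twice.
    t1["Age"] = pd.Timestamp("2024-06-01").year - pd.to_datetime(t1["DOB"]).dt.year

    # Link the tables on the shared entity key and flag value conflicts (issue 4).
    merged = pd.merge(t1, t2, on="PID", suffixes=("_1", "_2"))
    conflicts = merged[merged["Name_1"] != merged["Name_2"]]
    print(conflicts[["PID", "Name_1", "Name_2"]])  # PID 4791 disagrees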
2.2.2.2 Redundancy
Redundancy is another important issue in data integration. Two attributes (such as DOB and Age in the tables above) may be redundant if one can be derived from the other attribute or from a set of attributes. Inconsistencies in attribute or dimension naming can also lead to redundancies in the data sets.

Handling Redundant Data
Data redundancy problems can be handled in the following ways:
- Use correlation analysis.
- Consider different codings/representations (e.g. metric vs. imperial measures).
- Careful (manual) integration of the data can reduce or prevent redundancies (and inconsistencies).
- De-duplication (also called internal data linkage): used when no unique entity keys are available; the values in the attributes are analyzed to find duplicates.
- Process redundant and inconsistent data (easy if the values are the same):
  - Delete one of the values
  - Average the values (only for numerical attributes)
  - Take the majority value (if there are more than two duplicates and some of the values agree)

Correlation analysis (Pearson's product moment coefficient) is explained in detail here. Some redundancies can be detected by correlation analysis: given two attributes, such analysis can measure how strongly one attribute implies the other. For numerical attributes we can compute the correlation coefficient of two attributes A and B to evaluate the correlation between them:

    r_{A,B} = \frac{\sum (A - \bar{A})(B - \bar{B})}{n \sigma_A \sigma_B} = \frac{\sum (AB) - n \bar{A} \bar{B}}{n \sigma_A \sigma_B}

where
- n is the number of tuples,
- \bar{A} and \bar{B} are the respective mean values of A and B,
- \sigma_A and \sigma_B are the respective standard deviations of A and B, and
- \sum (AB) is the sum of the AB cross-product.

a. If r_{A,B} is greater than zero, then A and B are positively correlated: the values of A increase as the values of B increase. The value of r_{A,B} always lies between -1 and +1 (a sketch of the computation follows below).
b. If r_{A,B} is equal to zero, then A and B are independent of each other and there is no correlation between them.
c. If r_{A,B} is less than zero, then A and B are negatively correlated: where the value of one attribute increases, the value of the other decreases, i.e. one attribute discourages the other.

It is important to note that correlation does not imply causality. That is, if A and B are correlated, this does not necessarily mean that A causes B or that B causes A. For example, in analyzing a demographic database we may find that the attributes representing the number of accidents and the number of car thefts in a region are correlated. This does not mean that one causes the other; both may be related to a third attribute, namely population.
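As a small illustration, the sketch below computes r_{A,B} directly from the formula and checks the result against numpy's built-in routine; the two attribute vectors are hypothetical.

    import numpy as np

    # Hypothetical numerical attributes A and B observed over the same n tuples.
    A = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
    B = np.array([1.0, 3.0, 5.0, 9.0, 11.0])

    n = len(A)
    # r_{A,B} = sum((A - mean_A)(B - mean_B)) / (n * sigma_A * sigma_B)
    # (np.std defaults to the population standard deviation, matching the formula)
    r = ((A - A.mean()) * (B - B.mean())).sum() / (n * A.std() * B.std())
    print(round(r, 4))                         # 0.9912: strong positive correlation
    print(round(np.corrcoef(A, B)[0, 1], 4))   # numpy agrees: 0.9912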
For discrete (categorical) data, a correlation relationship between two attributes A and B can be discovered by a \chi^2 (chi-square) test. Let A have c distinct values a_1, a_2, ..., a_c and let B have r distinct values b_1, b_2, ..., b_r. The data tuples described by A and B can be shown as a contingency table, with the c values of A making up the columns and the r values of B making up the rows. Summing over every cell (A_i, B_j) of the table:

    \chi^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}}

where
- O_{i,j} is the observed frequency (i.e. the actual count) of the joint event (A_i, B_j), and
- E_{i,j} is the expected frequency, which can be computed as

    E_{i,j} = \frac{\left( \sum_{k=1}^{c} O_{i,k} \right) \left( \sum_{k=1}^{r} O_{k,j} \right)}{N}

where N is the total number of data tuples, \sum_{k=1}^{c} O_{i,k} is the number of tuples having value b_i for B (the row total), and \sum_{k=1}^{r} O_{k,j} is the number of tuples having value a_j for A (the column total).

The larger the \chi^2 value, the more likely the variables are related. The cells that contribute the most to the \chi^2 value are those whose actual count is very different from the expected count.

Chi-Square Calculation: An Example
Suppose a group of 1,500 people was surveyed. The gender of each person was noted, and each person was polled on whether their preferred type of reading material was fiction or non-fiction. The observed frequency of each possible joint event is summarized in the following contingency table, where the numbers in parentheses are the expected frequencies. Calculate chi-square.

                  male        female       Sum (row)
    fiction       250 (90)    200 (360)    450
    non-fiction   50 (210)    1000 (840)   1050
    Sum (col.)    300         1200         1500

For example, E_{1,1} = count(male) * count(fiction) / N = 300 * 450 / 1500 = 90, and so on. Then

    \chi^2 = \frac{(250-90)^2}{90} + \frac{(50-210)^2}{210} + \frac{(200-360)^2}{360} + \frac{(1000-840)^2}{840} = 284.44 + 121.90 + 71.11 + 30.48 \approx 507.93

Because the table is 2x2, the degrees of freedom are (2-1)(2-1) = 1. For 1 degree of freedom, the \chi^2 value needed to reject the hypothesis of independence at the 0.001 significance level is 10.828 (taken from the table of upper percentage points of the \chi^2 distribution, available in any statistics textbook). Since the computed value is far above this, we can reject the hypothesis that gender and preferred reading are independent and conclude that the two attributes are strongly correlated for the given group.
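The worked example can be checked with a few lines of Python; only the four observed counts are taken from the table above.

    # Chi-square test of independence for the 2x2 gender/reading table.
    observed = [
        [250, 200],    # fiction:     male, female
        [50, 1000],    # non-fiction: male, female
    ]

    N = sum(sum(row) for row in observed)            # 1500
    row_sums = [sum(row) for row in observed]        # [450, 1050]
    col_sums = [sum(col) for col in zip(*observed)]  # [300, 1200]

    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_sums[i] * col_sums[j] / N        # expected count, e.g. 90
            chi2 += (o - e) ** 2 / e

    # Prints 507.94 (the text's 507.93 comes from rounding each term first);
    # either way, far above the 0.001 critical value of 10.828.
    print(round(chi2, 2))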
Duplication must also be detected at the tuple level. The use of denormalized tables is another source of redundancy, and redundancy may in turn lead to data inconsistencies (when some copies of a value are updated but others are not).

2.2.2.3 Detection and Resolution of Data Value Conflicts
Another significant issue in data integration is the detection and resolution of data value conflicts: for the same entity, attribute values from different sources may differ. For example, weight may be stored in metric units in one source and in British imperial units in another. For instance, for a hotel chain, room prices in different cities may involve not only different currencies but also different services and taxes.