Saturday, August 31, 2019

Title of your paper

It feels like such a long time since I last saw you. I know I've only been away for a few weeks, but so far my vacation here in Greece has been great! I'm currently staying at the Academy. I found a friend who shares my passion for philosophy, and he asked me to come with him to this place to meet other people. I'm very glad to stay here; I've learned a lot from various people.

Anyway, yesterday I was walking along the Agora, the city market, and found some really great things to buy and take home. Here I found many people trading and talking, the women exchanging gossip and the men discussing politics. After hours of walking, I reached the Hephaisteion. I stayed for a while and witnessed the beauty of the temple. At noon the heat in Athens is fierce, so I took cover in the Stoa. There are two kinds of Stoa; the one I stayed at was the Painted Stoa, a long covered hall that is open on one side and decorated with many beautiful paintings. I spent almost an hour there, passing the whole time talking to people who engaged me in conversation. It's really funny, though, for they would just come up to me and ask me something all of a sudden. I was dumbstruck.

To the right of the Agora are the sacred places of the city, like the Theatre of Dionysos, the Asklepieion, and most of all the Acropolis. After a while of wandering around, my attention was caught by a group of men loudly arguing with one another. I asked someone what was going on, and he told me that the male citizens were debating big decisions that affect the city. The place I had wandered into was called the Pnyx, the home of the assembly of the people. However, I wanted to go somewhere quiet instead, and got interested in following young children, each carrying a small vase. I soon noticed I was back at the entrance of the city, the Kerameikos, for I used the potteries as a landmark. By the end of the day, I decided to spend my time at the cemetery and stayed there till sunset. The cemetery was wonderful, for it really depicts the culture of the Athenians through their pottery and carvings.

But what interests me most is the people's daily activities in the city. From what I've observed, they are almost routine. Different kinds of people have specific types of jobs to finish each day. Even before the day starts, people can be seen all over the place. I asked a man why he was up so early, and he said he had a trial and had to prepare. Women fetch water from the fountains, and traders are up for early trading. By mid-morning, servants are already working at their respective jobs, and by noon the Agora and almost every place in Athens is busy with people talking, doing business, and much more. By mid-afternoon the place starts to quiet down and shops prepare to close; this is my favorite part of the day in Athens, for it is very peaceful and relaxing. And at last, by midnight, servants start to relax and even play games with their friends.

I have somehow gotten used to the busy and loud environment here in Greece, though I'm not quite used to talking about politics all the time, or about serious topics like religion and life. Athens is a very commerce-centered city, and almost everyone has something to trade or something to offer as a livelihood. All over Greece, the people grew olives, grapes, and figs. They kept goats for milk and cheese. In the plains, where the soil was richer, they also grew wheat to make bread, making the city smell like a combination of vegetables and farm animals at the same time.

I have really liked it here so far. I wanted to explore the city more, but I guess I need more time. Nevertheless, today I was invited to a friend's house to spend the night. I'm going to get a look at a Greek house and be amazed once again. I'll write to you again very soon. Take care always. See you!

Friday, August 30, 2019

Culture Analysis of Toyota Essay

ABSTRACT This case study analyses the corporate culture of Toyota using two theories, then analyses the national cultures of Japan and the USA using two further theories and their impact on the corporate culture of Toyota. The models of Edgar Schein and Charles Handy will be used to analyse the corporate culture of Toyota, while the models of Geert Hofstede and Fons Trompenaars will be used to analyse the national cultures. Afterwards the case study will discuss the climate of Toyota and its impact on the company's success. The case study will also analyse the reasons why Toyota had to face failures and whether the company culture had any impact on them. It will also point out how the culture of Toyota became inflexible over a period in which the company was undergoing rapid expansion into other countries, and how this affected the company's success. Finally, the author will provide suggestions and advice as to how Toyota could develop its corporate culture in the future.

Toyota was established as a commercial vehicle manufacturer in 1937 with a capital of ¥12 million. By 1948 Toyota's debt was eight times its capital value. In the 1950s Toyota studied US plants, including Ford's, and supermarkets during a 12-week study visit; the visitors saw little improvement since a previous trip, but used supermarkets as a model for just-in-time production. Toyota entered the US in 1958 by launching its Toyopet model. It established its first overseas production unit in Brazil in 1959 and entered the European market in 1963. Besides manufacturing, the company started a global network of design and R&D facilities covering the three major car markets of Japan, North America, and Europe. The company underwent rapid expansion in the 1960s and exported fuel-efficient small cars to countries across the world. By the early 1970s, Toyota's global vehicle production was behind only that of GM and Ford. The oil crisis in the late 1970s gave a major boost to Toyota, with many people shifting to smaller, fuel-efficient cars, where Toyota had a significant presence. In 1988, Toyota opened its first plant in North America, in Georgetown. In 2000, Toyota's global production exceeded five million vehicles, and by November 2003 its market capitalization touched US$110 billion. In 2006, Toyota became the third largest car and truck seller in the US, surpassing Chrysler Group LLC (Chrysler). In 2007, Toyota, with sales of 2.6 million vehicles, overtook Ford for the second position in the US auto market. About two-thirds of Toyota's workforce was located outside Japan at that time. In July 2008, Toyota replaced GM as the largest automaker in the world, and in the financial year 2008 it emerged as the largest automobile manufacturer in the world.

2. National Culture & Toyota Culture

3.2. What is Culture

"Culture is not something you can manipulate easily. Attempts to grab it and twist it into a new shape never work because you can't grab it." – Prof. John P. Kotter

"Culture" could be defined as "the sum total of the beliefs, values, rituals, rules and regulations, techniques, institutions, and artifacts that characterize human populations". Sociologists generally talk about the socialization process, referring to the influence of parents, friends, education, and interaction with other members of a particular society as the basis for one's culture.
These influences result in learned patterns of behavior common to members of a given society.

3.3. National Culture

3.4.1. National culture according to Fons Trompenaars' model

Fons Trompenaars teamed up with Charles Hampden-Turner and developed a theory of culture.

Universalism vs. Particularism – Universalist cultures are strictly rule-based behavioral cultures, whereas particularist cultures tend to focus more on the exceptional nature of present circumstances. Toyota was a company working on a relationship-based culture, treating even its suppliers as its own. It valued these relationships and trusted that through such practices it would achieve success.

Specific vs. Diffuse – This is the manner in which the organization or culture handles its communications (low context vs. high context). The Japanese belong to a diffuse, high-context culture, and this was the case in Toyota as well, where long-term relationships with employees and suppliers were valued.

Individualism vs. Collectivism – Individualism is about the rights of the individual. It seeks to let each person grow or fail on their own, and sees group focus as denuding the individual of their inalienable rights. Communitarianism is about the rights of the group or society. It seeks to put the family, group, company, and country before the individual, and sees individualism as selfish and short-sighted. It is well established that the Japanese work as groups, with team members and senior managers deciding together on many strategies.

Inner-directed vs. Outer-directed ("Do we control our environment or work with it?") – An inner-directed culture assumes that thinking is the most powerful tool and that considered ideas and intuitive approaches are the best way. An outer-directed culture assumes that we live in the 'real world' and that is where we should look for our information and decisions. Japanese culture has strong beliefs in the power of thinking; at Toyota they created their own environment by introducing TPS and the Toyota Way.

3.4.2. National culture according to Geert Hofstede's model

National cultures can be described according to the analysis of Geert Hofstede, which has five dimensions: Power Distance, Individualism, Masculinity, Uncertainty Avoidance, and Long-Term Orientation. Japanese national culture had a huge influence on the corporate culture of Toyota even though its operations stretched to other parts of the world.

Power Distance – This means "the extent to which the less powerful members of institutions and organizations within a country expect and accept that power is distributed unequally". Hofstede's scores show that Japan has a greater power distance than US culture. This is clear in Toyota, where all strategic decisions were taken through the head office in Japan by a hierarchical layer with more authoritative power; most decisions depended on a few individuals.

Individualism – Individualism is the opposite of collectivism, that is, the degree to which individuals are integrated into groups. "Individualism pertains to societies in which the ties between individuals are loose: everyone is expected to look after himself or herself and his or her immediate family." In Toyota all employees were treated as equally important, referred to as knowledge workers, and everybody was given the freedom to come up with ideas.
Hofstede's scores clearly show the USA as an individualistic culture, whereas Japan leans more towards collectivism.

Masculinity – This is the degree to which 'masculine' values like competitiveness and the acquisition of wealth are valued over 'feminine' values like relationship building and quality of life. Both Japan and the USA score high on masculinity, but the score is much higher for Japanese culture. Toyota was obsessed with overtaking its competitors and becoming the largest automaker, which it did in 2008, a masculine approach aimed at proving its power over its competitors.

Uncertainty Avoidance – This focuses on a society's tolerance for uncertainty and ambiguity. A high uncertainty-avoidance ranking indicates that the country has a low tolerance for uncertainty and ambiguity, which creates a rule-oriented society that institutes laws, rules, regulations, and controls in order to reduce the amount of uncertainty. The Japanese try to avoid uncertainty by planning everything carefully; theirs is a culture that depends on rules, laws, and regulations, wants to reduce its risks to the lowest level, and proceeds with changes step by step. The United States scores 46 compared to Japan's 92, so uncertainty avoidance in the US is relatively low, which can clearly be seen in the national culture. In Toyota, all the related parties (suppliers, designers, engineers, dealers, and partners) were involved in the manufacturing process right from the design stage through to marketing the product, so that they produced exactly what was needed with minimum risk.

Long-Term Orientation – This focuses on the degree to which a society does or does not embrace long-term devotion to traditional values. A high long-term-orientation ranking implies that the country embraces the values of long-term commitment and respect for tradition, and that long-term rewards are expected as a result of today's hard work. This is very evident in Toyota, which spent substantial revenue and focus on R&D activities even in tougher times.

Hofstede's Dimensions of Culture Scales

Considering these factors, it is obvious that Toyota (with Japanese culture embedded in its organizational culture) faced a significant cultural impact when working in the USA, as US culture is very different from Japanese culture.

3.4. Culture of Toyota

3.5.3. Toyota's culture according to Edgar Schein's theory

Schein's three-levels-of-culture model was developed in the 1980s. Schein identifies three distinct levels in organizational cultures: 1. Artifacts and behaviors, 2. Espoused values, 3. Basic assumptions.

Artifacts of Toyota – Artifacts are the visible elements of a culture and can be easily recognized by people: dress codes, furniture, art, work climate, stories, work processes, organizational structures, etc. Toyota's artifacts could be:
* A fuel-efficient vehicle manufacturer
* A high concentration on maintaining quality and minimizing waste

Basic Assumptions of Toyota – Basic assumptions reflect the shared values within the specific culture. These values are often not especially visible to members of the culture or to external parties. Assumptions and espoused values are possibly not correlated, and the espoused values may not at all be rooted in the actual values of the culture.
This may cause great problems, where the differences between espoused and actual values create frustration, low morale, and inefficiency. It was when Toyota ventured into the US that conflicts in culture started to appear. Japanese corporate culture often conflicts with American management styles, partially due to basic underlying assumptions of Japanese culture:
* Japanese corporate decision-making involves the group, whereas Americans make decisions as individuals.
* Japanese management is much more focused on relationships with employees than on rules to ensure corporate goals are met.
* Managers in Japan depend on the honor system to get work done, relying on their workers' trust and goodwill.
* Toyota maintained traditional structures and hierarchy.
* Functional managers acted as mentors to other staff to pass on the values and culture of the organization.
* Chief engineers played a vital role in the organization.
* Employees at all levels were treated as knowledge workers.
* All employees were encouraged to communicate in simple language and to be part of different clubs and groups to share ideas among themselves.
* Personal relationships were valued highly.

3.5.4. Toyota's culture according to Charles Handy's theory

Charles Handy classified organizational culture into a range of four cultures: 'Power', 'Role', 'Task' and 'Person'.

Power Culture – Power is concentrated in a small group. Power radiates out from the centre, usually a key personality, to others in the organization, who send information down to other departments, functions, or units. Even after Toyota's global expansion across different continents, the main decision-making power remained with headquarters, reflecting control centralized in the Japan headquarters.

Role Culture – This culture comprises several functional units of the organization which have to implement decisions. The strength of the culture lies in the specialization within these functional units; interaction takes place between the functional specialisms through job descriptions, procedures, rules, and systems. Toyota showed many signs of role culture. During the manufacturing process, engineers, suppliers, and all the other related parties were involved from design through to the sale of the vehicle. All employees were treated as equals, and each employee was given the opportunity to make suggestions or express their feelings. Toyota also had separate divisions operating for separate functions such as Sales, Finance, Legal, Manufacturing, and R&D.

Task Culture – Such cultures belong to organizations heavily involved in R&D activities; they create temporary task teams to meet their future needs, and information and expertise are the skills of value here. Toyota did not show much of this culture, but since it was very aggressive in intensive R&D activities and emphasized that engineers should spend more time on core engineering and technical skill acquisition, a little task culture did exist in Toyota.

3. Corporate Climate

4.5. What is corporate climate?

Climate is defined as the recurring patterns of behavior, attitudes, and feelings that characterize life in the organization. Climate affects employee attitudes and motivation, which directly affect business performance.
4.6. Was the climate correct in Toyota?

The corporate climate in Toyota was set right at the beginning but lost its way when the expansion process took place. Toyota has been valued as an organization driven by its values, processes, and philosophies. Its main focus was initially on understanding user requirements through intense R&D activities and fulfilling them while maintaining a high level of quality; for this, mechanisms such as TPS and the Toyota Way were developed. The Toyota Way was invented, discovered, and developed over decades as talented Toyota managers and engineers learned to cope with Toyota's problems of external adaptation and internal integration. Managers understood the challenges and context that led to active on-the-floor problem solving, not theoretical, top-down exercises. Communications were very strong among the functional units.

With the rapid expansion and globalized diversification being carried out (most broadly in the USA), Toyota turned into an ambition-driven company that ignored its traditions. Conveying the Toyota Way to an alien culture was an uphill task and a costly exercise. There were also signs that the top level of the company had its own issues. In 1995, when Okuda became President, he made dramatic changes to the long-lived traditions of Toyota culture by cutting costs, increasing focus on product development, and revamping product designs. Under his leadership, Toyota went on a massive overseas expansion at a rapid pace, but cultural development and the established values were not conveyed at the same pace. Once the expansions were in place, the focus of the company became being the largest car-making company in the world, beating GM; it was obsessed with this new vision. In parallel, somewhere in the early 2000s, Toyota launched the CCC21 cost-cutting program. As a result, many Japanese employees were withdrawn from overseas plants, and the transfer of age-old quality practices and corporate philosophy to the subsidiaries could not be completed. Finally, owing to the new vision of being the largest car maker, production was emphasized over quality, and Toyota looked for suppliers who could produce parts at lower cost. Due to this cultural change and the knowledge gap between suppliers and Toyota, a series of quality failures was observed, culminating in August 2009.

4.7. Areas where the culture went wrong

As the case study makes evident, the two countries naturally have different cultures, and these affect any new venture with cross-cultural dynamics. Toyota's culture was very concerned with values, processes, and the people involved, much in line with Japanese cultural influence. But with the expansion, such practices were not effectively transferred to employees in the USA, who were part of a different culture. Even though different divisions were set up in different parts of the USA, all the main decisions were taken at headquarters in Japan, and the overseas divisions were not given much authority. Another factor was that Japanese culture requires a lot of paperwork to take a decision, whereas US culture favours quick decisions; because of this, several crucial decisions could not be taken on time, leading to losses and at times even legal penalties. The rigid structures and hierarchy were not helping operations and were not letting the company grow towards the future.
As decision making rested solely with headquarters, managers in the US offices were not empowered or given opportunities; they simply followed set orders and tasks.

4. Suggestions for the way forward

5.8. How could Toyota do better in the future?

When managing cross-cultural issues, it is important that both parties spend a considerable amount of time understanding each other's cultures. While top-level managers concentrate on new diversification, the product lines, and the bottom line, they should also strategize on how to manage cultural issues. Toyota could have sent senior managers to the USA prior to the expansion to really understand US culture, and likewise could have brought the senior managers to be recruited from the USA to Japan, so they could gain a deeper understanding of Toyota's corporate culture and values. In the same way, Toyota must be flexible about the structures and the hierarchy of the company, empowering the other unit heads to take decisions and to be innovative, and backing them in their decisions. Instead of adopting a culture where rewards are given only for growth or production, it could combine these with encouraging workers to perform better in order to collectively improve the company. More relationships could have been built with suppliers and dealers in order to maximize production output and develop exactly the required features. The workforce in the US plants should be a combination of Japanese and US employees, even at senior levels; this way the touch of the original Toyota values and philosophies would not die fast and could be instilled in the other employees as well.

While trying to be the leader in the automobile market, it is not advisable to rely only on cost leadership. The case study, and many other articles by industry experts, show that due to severe cost-reduction practices Toyota lost its core value, which is the quality of its product; hence it is always good to have a mixture of strategies when conquering a market. Another aspect is quick decision making: when working with a culture like that of the USA, which is keen on quick decisions, the Japanese should react fast to situations or risk incurring losses. R&D activities must be kept on the correct path, as such practices define the future of the company; if R&D had been done properly at Toyota, it would not have incurred so many losses through recalls and poor product designs. Finally, the sharing of information is a definite need when dealing with cross-cultural matters; since the two cultural parties are new to each other, such communication bridges the gap.

5.9. Measures Toyota could take to effectively embed the proper culture in its employees

As mentioned earlier, studying the cultures involved is an important process in any organization. For example, the company I work for (a leading optical service provider in the country) closely monitors the culture of the suburb or region into which it is thinking of expanding before taking any key decisions. In the same way, Japanese senior managers could have stayed in the USA long enough to get a grasp of the culture and understand its values and ways of doing things. Understanding the culture of the market you are entering is a key strategy.
Secondly, Toyota could have brought in the US managers who were to take up senior positions in the US plants well before the factories were installed, as an induction or apprentice programme, so that the Japanese managers could really transfer the cultural aspects and values of Toyota that had been practiced successfully for decades. In my organization we follow such practices: we recruit employees from the region into which we plan to expand and place them at our head office, so that they are well trained and really understand our values. Similarly, once a new branch is set up, we send one of our senior staff or managers there for a certain period, so that he can mentor the others and also bring back details of the prevailing culture of the region.

Another thing Toyota must do is empower the managers from the local culture, so that decision making and other practices are much more effective and related to the actual requirements. Again I can cite my company, where all branches operate as separate profit centres and the branch manager is empowered to take decisions on behalf of the organization on many operational and, at times, strategic matters. Also, for the employees of the two cultures to build closer ties, Toyota could use prevailing technologies such as social networking sites, whereby employees of the two cultures meet in a virtual world, get to know each other better, and share ideas among themselves. This way belongingness and teamwork develop among the employees. In our organization, we arrange staff outings, workshops, outward-bound training programmes, and other get-together activities through which staff get to know each other better and share their ideas.

5. Conclusion

As most of the solutions are given in the previous paragraphs, the following points should be considered when managing cross-cultural issues. In cross-cultural management, different corporate cultures can be identified, and proactive solutions must be developed to ensure compatibility between all parties and their cultures; each culture must be valued, as both are similarly valuable to both parties. When recruiting new employees, it is very important to mentor them about the prevailing corporate culture and the values attached to it. Train and socialize current employees to be more receptive to incoming cultures. Change and be flexible about the organizational structure to give employees more control, and empower employees to make decisions about their jobs. The long-lived traditions and best practices should never be neglected; above all, culture plays a very vital role in an organization's success.

6. References
http://geert-hofstede.com
ICBT Study materials
www.lindsay-shervin.co.ul
www.changingminds.org
www.businessmate.org

Geography of Time Essay

The sixth chapter, "Where is Life Faster?", discusses differences between life tempos in different cultures, trying to reveal in which culture life is the fastest. The author writes that it is very interesting for him to compare one culture to another, because many unknown facts contributing to the development of psychological studies can be identified. The author's comparison focuses on time and the speed of life. Cultural tempo is argued to affect the quality of human life. Nevertheless, it may be tricky to compare different cultures, because labeling individuals should have a scientific or psychological basis, and it is necessary to go beyond boundaries in order to measure the tempo of life with accuracy and objectivity. The author finds it interesting to compare indicators of speed in working offices in different countries. However, this research failed, as the author needed to find observable jobs performed by workers who were residents of the particular country. Research at gas stations failed as well, because such businesses are not equivalent across countries.

According to the author's research, the fastest countries are Japan and the Western European countries. Western Europe has nine of the fastest countries, and Japan is the only Asian country with a comparable life tempo. First place in Western Europe goes to Switzerland, and second to Ireland: Ireland has the fastest walking speed, whereas Switzerland scores highly across the measures. Surprisingly, New York did not gain the highest scores, as some office workers there move very slowly. In contrast, the slowest speed of life is observed in non-industrialized countries, namely those of Asia, Africa, and Latin America. The slowest are claimed to be Brazil, Indonesia, and Mexico. Daily life in these countries is very slow; Brazilians, for example, "not only expected the casual approach of life, but had abandoned any semblance of fidelity to the clock" (p. 136). In this way the author shows that there are many ways to measure the speed of life, and the results show that different cultures have their own life tempos.

Further, the author compares Japan, the USA, and Western Europe to identify which of them enjoys La Dolce Vita. Many European countries are characterized by opportunities to relax and by the pleasures of the good life. Therefore, Levine suggests that La Dolce Vita comes more easily to Europeans than to Asians and Africans. For example, the Japanese work harder and have less time for relaxing, while Europeans are claimed to live better than Americans. La Dolce Vita is welcomed in Italy, where people try to balance hard work and leisure. It should be underlined that the working week is longer in the USA than in most European countries, while Japan has the longest working week of all: for more than half a century the working week there has not changed, and it is argued that time for leisure is decreasing and the nation has less time for itself. In Europe, by contrast, the tendency to work has been replaced by a tendency to relax; without leisure, workers in France, for example, become more irritable and nervous. Therefore Western Europeans have more vacation time; in France "workers by law receive at least five weeks and often six weeks of paid vacation" (p. 143). When comparing countries, examples of cultural differences are seen most clearly. However, the speed of life also varies across cities and regions of a single country.
This is true for the USA as well, since the country is very large and each state has its own traditions and customs. Even the slightest geographical shifts are profound; moving from Oklahoma to Texas, for example, is viewed as "entering France, say, out of Switzerland" (p. 146). The author sets out to reveal whether there are differences between New York and other large cities. The research results demonstrate that the Northeastern United States is fast-paced, whereas Californians are more relaxed. Boston and New York are the fastest cities in the country, whereas Los Angeles is claimed to be one of the slowest. One of the biggest challenges was measuring walking speed accurately, as in some regions it was hard to find any walkers at all.

Thursday, August 29, 2019

Research Project Proposal on Emerging Technologies

To begin with, tutors want to be able to collaborate with their students as well as their colleagues using a means that is relatively cheap or free, since educators spend a lot of their own money on numerous resources (Wylie, 2012). Twiddla provides internet-based software with free access, and this is perfect for any meeting that does not need privacy or a later login to look at saved meetings. The platform also has a set of math symbols that can be embedded in the whiteboard spaces used by a teacher or students (Bernard, 2011). This is important, since it is difficult to conduct a math discussion without the required symbols, and many sites do not integrate these symbols into their boards. Collaborating with Twiddla simply requires a computer, a browser, and a link to the internet, so students and teachers do not have to download software, which is very helpful; all the host is required to do is start a meeting, after which he or she shares the provided URL with the others and the meeting can go on. Apart from the hassle-free invitation, all the tools are easy to use and need minimal explanation, making them practical for the K-12 classroom. The site allows students to explore each tool without worrying about ruining anything or making mistakes, since it integrates an erase tool and the option of starting on a new sheet. One characteristic of Twiddla that makes it more conducive to progressive learning methods is that it gives all users the ability to mark on the whiteboard in an easy way. Typically, in an in-person classroom, only the teacher marks on the whiteboard; this cannot be blamed on teaching philosophy but rather has more to do with the logistical constraints of having many students moving up and down in the classroom and standing in front of the whiteboard so that they

Wednesday, August 28, 2019

Ecological Modernization Essay Example | Topics and Well Written Essays - 2500 words

It is a new approach that makes society more concerned with the environmental issues affecting it, aimed at giving the world a new outlook on how it can integrate modern technology not only to meet its own needs but also to meet the needs of the environment. This new approach is taking place in several spheres, including the changing role of science and technology, the increasing importance of market dynamics and various economic agents, the transformation of the role of the nation and state, modification of the position and ideology of social movements in society, and changing discursive practices with the integration of newly emerging ideologies (Fisher and Freudenburg, 2001).

There are many ways in which the concept of ecological modernization can be applied in the modern world, as far as we deal with the environment and the need to advance technologically. Many supporters of the concept have argued that the rationale behind ecological modernization is the need to develop in all aspects of life: society needs to develop economically and socially and at the same time take care of the environment. What has been happening in the world has acted as a wake-up call on the need to be conscious of the needs of the environment, and there is therefore a need for the world to become more focused on the changes taking place in it. The theory of ecological modernization assumes that there are ways in which the world can use modern technology to reduce resource consumption and at the same time increase efficiency in the use of resources. It therefore calls for a change in the production process focused on reducing the wastage of resources, using means such as waste recycling. This has been one of the positions of industrial ecology, which has taken up the concept of using raw materials sparingly in order to enhance sustainable development (Cahill, 2002).

Therefore one of the most important aspects of this theory is its support for sustainable development. As defined by the United Nations, sustainable development is development that meets the needs of the present generation in a way that does not compromise the ability of future generations to meet their own needs. It therefore postulates that we are guardians of the world for the future generation; in this regard we have to use the resources we have sparingly, so that we enhance the ability of future generations to meet their needs from the same resources. Ecological modernization calls for the use of modern technology in a way that helps us use our resources while helping future generations use the same resources. One of the strengths in the development of the theory has been the support it has received from civil society; the rise of civil society has been one of the most important factors enabling the growth of the theory. This has been due to the fact that the civil

Tuesday, August 27, 2019

Works of Jacques Louis David and Daumier Essay Example | Topics and Well Written Essays - 500 words

During the period of Romanticism, the painter worked quickly, with freer and looser brush strokes giving evidence of the process of artistic creation. Another important aspect of Romanticism was an interest in social issues, leading to larger participation and concern in the events of the time. This is seen in the works of Eugene Delacroix, as in his Moorish scenes of men and wild beasts in physical conflict. He cultivated surface texture and impasto and used a rich palette of colors. Delacroix also pursued the same theme in his Jacob Wrestling with the Angel and in his North African paintings of turbaned men battling with tigers. Delacroix, however, is known best for his Liberty Leading the People, a patriotic painting of the French Revolution, in which the central figure of a woman beckons the soldiers forward with the flag she raises high above the field of the dead and wounded, while the drummer boy beside her valiantly charges with a pistol upraised. These two figures, which form strong, vigorous diagonals, stand out amidst the smoke and confusion of the battle. An important realist is Honore Daumier, whose rare gift for social satire found expression in his prints, political cartoons, and paintings. While he lashed out at the corruption and hypocrisy of the privileged class, as in The Legislature, he had a profound sympathy for the poor and the oppressed, as in The Third-Class Carriage and The Washerwoman. Daumier had a sense of the dramatic moment revealed in a single look or gesture.

Monday, August 26, 2019

The proposal for an annotated bibliography Research

Zhu, Jieming, "Local growth coalition: the context and implications of China's gradualist urban land reforms," International Journal of Urban and Regional Research, Vol. 23, No. 3 (1999): 534-548.

In the above journal article, Zhu discusses the process and end results of changes in China's land tenure system. This paper will be employed in the analysis of the policy implications of China's urbanization.

The paper above discusses the ways in which China is adapting to the fast urbanization process. It focuses on the land economics behind the reproduction of spaces. This will be used to analyze the ways in which China has mitigated the effects of land scarcity amid urbanization.

The above authors examine the importance of research in determining the public administration of cities undergoing urbanization, such as those in China. This paper will be used to identify the forms of governance China has employed to embrace the changes associated with urbanization.

Song Yan and Chengri Ding analyze the factors associated with the fast growth and development of Chinese cities. This paper will be of vital use in determining how the problems associated with urbanization can be minimized while its benefits are internalized.

The novel above describes the changes which China has been generally experiencing in its growth and urbanization phase. The novel will be used to aid a practical understanding of the direct causes and impacts associated with urbanization in China.

The above topic will serve well in appreciating urban geography concepts as they appear in the real world. China is one of the best countries to study, since it is currently one of the fastest-growing economies in the world. This has led to rapid urbanization and

Sunday, August 25, 2019

Exploring the Brain Responses Assignment Example | Topics and Well Written Essays - 2750 words

The experimental study has the hypothesis that inducing either rTMS or stimulants triggers dopamine production. In the two experiments, the brain is induced with rTMS for patients with depression, and with stimulants for the c-Fos experiment, which uses rats.

Procedure outline of rTMS: Eight patients with depression were treated with rTMS over the left prefrontal cortex on a daily basis. Each of them underwent neuropsychological test scoring and a PET scan before and after the rTMS treatment (Goldman et al. 1978).

Procedure outline of the c-Fos experiment: Six rats were injected with cocaine and six with amphetamine. The rats were then killed and their brains extracted. Each brain was preserved and treated with antibodies that recognize c-Fos-positive cells. A special dye was then added to reveal the location of the c-Fos cells; the cells are easily counted, since the dye turns them brown.

Q1a. The independent variable (IV), the conditions, and the two dependent variables (DV) for this study. First, consider the rTMS experiment. The independent variable (IV) is the raclopride binding; it does not rely on the other variable but is rather depended on by the other variable. The conditions of the experiment are repetitive transcranial magnetic stimulation (rTMS). There are four dependent variables that depend on the set conditions, namely R caudate, L caudate, R putamen, and L putamen, which vary depending on the rTMS induced in the patient. In the second experiment (c-Fos), the independent variable is the count of c-Fos-positive cells, which is not necessarily dependent on the other variables set. The conditions are cocaine and amphetamine. The dependent variables are measured in the nucleus accumbens from sections of the rats' brains (Goldman et al. 1978).

Q1b. Explanation of the study as a within-participant study. The rTMS study involves several patients who are observed before and after rTMS induction. A PET scan is then done to establish the number of functional dopamine receptors using radioactive raclopride. The study is thus a within-participant study, where the data obtained come from the patient pairs under experiment. The participants are eight patients rather than a single patient; since two pairs of participants had the same pre-rTMS test scores, represented by a single point for each pair, the study is a within-participant study. The c-Fos experimental study is also a within-participant study, because the experiment involved the participation of every group member, meaning that data had to be obtained from all group members.

Q1c. The vital piece of statistical information missing from the study results. The essential piece of statistical information missing from the results presented here is a hypothesis. A hypothesis is a vital tool in analyzing the data presented, allowing one either to agree or disagree with the data; it is critical in drawing conclusions, such as establishing the level of deviation from the expected results set out in the hypothesis. This piece of information, set as a hypothesis, would act as the researchers' guideline when setting the procedures. The tool is also helpful to a researcher in the field, since it defines the research scope.
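To make the point in Q1c concrete, the missing hypothesis could be tested with a paired (within-participant) comparison of each patient's pre- and post-rTMS scores. The sketch below is illustrative only: the eight score pairs are hypothetical numbers invented for the example, `scipy` is assumed to be available, and this is not the analysis the original researchers performed.

```python
# Hypothetical within-participant analysis: paired t-test on pre/post rTMS scores.
# The eight score pairs below are invented purely for illustration.
from scipy import stats

pre_rtms = [12, 15, 11, 18, 14, 16, 12, 15]   # hypothetical pre-treatment test scores
post_rtms = [15, 17, 14, 19, 18, 17, 15, 18]  # hypothetical post-treatment test scores

# Null hypothesis H0: mean(post - pre) == 0 (rTMS has no effect).
# Alternative H1: mean(post - pre) != 0.
t_stat, p_value = stats.ttest_rel(post_rtms, pre_rtms)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: scores changed significantly after rTMS.")
else:
    print("Fail to reject H0: no significant change detected.")
```

Stating such a hypothesis up front, with its significance level, is exactly the guideline the answer to Q1c says the reported results lack.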

Saturday, August 24, 2019

News Article Assignment Example | Topics and Well Written Essays - 250 words - 2

Regular testing is very essential in determining the STD that one may be infected with, as viral and bacterial STDs have different treatment methods, and some may show no symptoms but attack when it is too late. The issues of STD transmission, treatment, and prevention are covered in Biology: Concepts and Connections, chapter 27.7. As the article notes, apart from viral and bacterial STDs, fungi and other organisms can cause some STDs. The article suggests that knowing the cause of an STD makes it much easier to treat, with the most common cause being bacteria, which affect over 90 million people globally. The article reports that one advantage of bacterial STDs is that they are curable, in contrast to viral STDs such as AIDS, which has defied all treatment methods. Gonorrhea and syphilis are examples of bacterially transmitted STDs. Therefore, knowing the actual cause of the STD makes treatment much easier and prevents cases of misdiagnosis. Lack of knowledge about these diseases may be fatal and may lead to complications in the late stages of some, such as syphilis, which may attack the nervous system. The article presents some scientific facts: it assesses the main pathogens of many STDs and tries to differentiate them by elaborating on the nature of STDs. Similarly, the article calls for care when dealing with STDs, as some are contagious, and finally stresses the need to take the necessary tests to determine the actual STD in question.

Understanding STDs and the Importance of Regular Testing, Mod to Modern, 24 March 2013, http://www.modtomodern.com/understanding-stds-and-the-importance-of-regular-testing/ (accessed 12th April,

Friday, August 23, 2019

Film Westworld Essay Example | Topics and Well Written Essays - 500 words

One of the most striking characters is the android known as the Gunslinger (Yul Brynner), who portrays a robotic Old West sheriff that kills humans. As Benjamin dies from the bite of the android snake, Brolin faces the out-of-control Gunslinger. Benjamin is lucky, since the gun-slinging android shuts down at the end as some of its circuits fail. It is remarkable that, at the heights of the technology humans possess, we would have the idea of bringing about real-life interaction between humans and machines. This is very evident in the movie Westworld (1973). We can see that technological enhancement really works to pursue better living and, at the same time, recreation. The highlight is that technology is really promising in different aspects of life. Many of man's inventions and discoveries have produced positive impacts; however, there is no guarantee that technology itself will be without flaw. Part of the expectation with respect to technology is that it has the possibility of going out of control, which is exactly what happens in the movie. As the operators of the resort admire their wonderful work, robots providing services to humans, they are shocked to see everything turning into a catastrophe. That is putting too much faith in technology, without realizing that the same things people enjoy from it could one day be the very things that put one's life in danger. Interaction between humans and machines is an extremely exciting idea, but Westworld suggests how men can become drunk on the heights of human technology and do things that totally violate human morals. You can just imagine how the place lets anyone who chooses to stay there for a vacation act out fantasies of their own, such as sex and killing. Technology in some measure is good; however, the movie Westworld created a role-playing world wherein people can do whatever they want (killings

Thursday, August 22, 2019

HIV and the latino community in the U.S Essay Example | Topics and Well Written Essays - 1000 words

In the United States, 1.2 million people are living with HIV. African Americans have the highest prevalence of HIV by race, accounting for 45% of those infected; Latin Americans constitute 22% of those infected. Among Latin Americans, 19% of HIV cases are attributed to heterosexual contact. The rate of HIV/AIDS infection among Latin Americans is second only to that of African Americans, and is 3.5 times higher than that of non-Hispanic White Americans (Centers for Disease Control and Prevention, 2012).

Human Immunodeficiency Virus causes a disease that attacks various organ systems of the body and weakens its ability to protect itself from infection; the last stage of this disease is AIDS, or Acquired Immune Deficiency Syndrome (Krapp, 2002). Aside from the physical aspects of the disease, HIV/AIDS can also affect the mental health of the individual. It can cause emotional distress, anxiety, and, most dangerously, depression. This may stem from various sources, such as the stigma associated with the disease, or it can occur if the infection reaches the brain. The psychiatric and psychological side effects of the disease are connected to social stigma, especially given its nature as a sexually transmitted disease commonly associated with sexual promiscuity or homosexual contact; there is thus a social aspect to preventing this disease, aside from providing care to those who have it.

A study by Zea, Reisen, Poppen, Bianchi, and Echeverry (2005) examined how Latinos telling people close to them about their HIV status helps their mental health during the disease. It shows that telling trustworthy people about their disease helped them get the social support they needed to get through it without losing their self-esteem or lapsing into depression. On the other hand, a study by Gerbi, Habtemariam, Tameru, Nganwa, Robnet, and Bowie (n.d.) discusses how psychosocial factors can lead to substance abuse and other HIV/AIDS risk behavior. It is a harsh circle that feeds upon itself: risky behavior increases the chances of contracting HIV/AIDS; when people get HIV, they have to handle the stigma of the disease; they are pressured into not telling others, and stressed by how people's treatment of them changes once their status is known. This situation creates psychological stress, which may lead to more risky behavior such as substance abuse, or to depression, which may increase susceptibility to other illnesses against which the body cannot defend itself due to the compromised immune system.

HIV is not just a systemic infection of an individual; it also affects the person's life, psyche, and the people around them. Giving patients medicine to manage the illness is not enough; they need help facing the emotional demands of the disease, such as stress, anger, grief, helplessness, depression, and even cognitive disorders if the disease reaches the brain. Aside from an immunologist, it would be wise also to consider seeing a psychiatrist, who can help handle the mental aspect of the illness (American Psychiatric Association, 2006). For Latinos, the risk of HIV is framed by their ethnic and racial minority status, which also connects to their socioeconomic status. These factors, plus gender, sexual orientation, and stigma, increase their vulnerability to HIV/AIDS. According to a

Education Is the Key to a Good and Successful Life Essay Example for Free

Getting a good education is one of the foundations of living a good life. Of course morals, family, and religion are huge parts of your life, but without a good education you'll have a hard time going anywhere but down in this world, I'm afraid. The word education is often misrepresented, though, because it does not always mean reading lots of books and writing tons of papers to get a good grade. No, education is learning how to do things the right way at its most basic level, and when you do things right in life you become successful without a doubt. Sure, there are people out there who don't need their high school diplomas, or didn't need college, and that's just great! They've been naturally blessed with strong minds and good skills to provide for them in life. But for most people, life is a learning process, and school helps you organize the early years of that process so you can become as efficient and successful as early in life as possible. So if you're thinking about dropping out of high school, you need to think twice!

High school definitely isn't the most exciting place to spend your youth, that's a fact! But nonetheless it's still important to your life on a huge scale. High school provides you with a general range of knowledge that can be applied throughout your entire lifetime. In high school you also get the chance to branch out a little more and experience a little bit of everything, which helps you decide what you want to do when you finally finish high school and begin the next step in your life. After high school, the possibilities grow so much greater. College is the general destination for most high school graduates, but it definitely isn't for everyone! Some people will continue to college, pick a major, graduate, and become successful in their field for the rest of their lives, and that's just wonderful! But for others, that may not be the best route to take.

A lot of people join the military, because it's a good lifestyle to live. The military provides a structured way of life as well as a strong paycheck. Army personnel, whether officers or enlisted, never go to bed hungry or cold, because what they can't provide for themselves the military will provide for them. Granted, this comes at a huge cost: you will serve out the time you signed up for, no excuses, and you put forth 100% of yourself (including your very own life) for our military. It's a good lifestyle to live, but it also has its price. Another choice would be a trade school, to become an electrician, plumber, or another manual-labour type of worker. These jobs, compared to just mowing lawns, give you a good education in the field, and you can earn large amounts of money because of all the technical knowledge and skill required. This way of living is definitely not for those who don't want to get dirty, or for those who are lazy, because it is not an easy way to go! Many people also just live off their own cleverness. They sell products or invent nifty tools that people buy and use. These are a special kind of people, because they work much harder than the others. These are the kind of people who started with almost nothing and became millionaires. They didn't inherit the money; all they did was use the minds in their heads to get ahead in life. So before you blow off education, just remember these things!
If you think dropping out of high school is for you, then think again, because once you finish high school the possibilities open up before you. The military, trade schools, your own cleverness, or even just college are great choices after high school, and they all will provide you with the skills and tools needed to become successful in life. So don't brush off education as a waste of time just because it's not the most fun thing to do, because even though it's not fun, it's still hugely important to your life. Remember that, just remember that.

Wednesday, August 21, 2019

Implementation of New Computer Network

Implementation of New Computer Network Here we are going to implement an new computer network for this company that 25 employees have been working in. Suppose you want to build a computer network, one that has potential to grow to global proportions to support applications as diverse as teleconferencing, video-on-demand, electronic commerce, distributed computing, and digital libraries. What available technologies would serve as the underlying building blocks, and what kind of software architecture would you design t integrate these building blocks into an effective communication service? Suppose you want to build a computer network, one that has the potential togrow to global proportions and to support applications as diverse as teleconferencing, video-on-demand, electronic commerce, distributed computing, and digital libraries. What available technologies would serve as the underlying building blocks, and what kind of software architecture would you design to integrate these building blocks into an effective communication service? Answering this question is the overriding goal of — to describe the available building materials and then to show how they can be used to construct a network from the ground up. Before we can understand how to design a computer network, we should first agree on exactly what a computer network is. At one time, the term network meant the set of serial lines used to attach dumb terminals to mainframe computers. To some, the term implies the voice telephone network. To others, the only interesting network is the cable network used to disseminate video signals. The main thing these networks have in common is that they are specialized to handle one particular kind of data (keystrokes, voice, or video) and they typically connect to special-purpose devices (terminals, hand receivers, and television sets). What distinguishes a computer network from these other types of networks? Probably the most important characteristic of a computer network is its generality. Computer networks are built primarily from general-purpose programmable hardware, and they are not optimized for a particular application like making phone calls or delivering television signals. Instead, they are able to carry many different types of data, and they support a wide, and ever-growing, range of applications. This chapter looks at some typical applications of computer networks and discusses the requirements that a network designer who wishes to support such applications must be aware of. Once we understand the requirements, how do we proceed? Fortunately, we will not be building the first network. Others, most notably the community of researchers responsible for the Internet, have gone before us. We will use the wealth of experience generated from the Internet to guide our design. This experience is embodied in a network architecture that identifies the available hardware and software components and shows how they can be arranged to form a complete network system. To start us on the road toward understanding how to build a network, this chapter does four things. First, it explores the requirements that different applications and different communities of people (such as network users and network operators) place on the network. Second, it introduces the idea of a network architecture, which lays the foundation for the rest of the book. Third, it introduces some of the key elements in the implementation of computer networks. 
Finally, it identifies the key metrics that are used to evaluate the performance of computer networks.

1.1 APPLICATIONS

Most people know the Internet through its applications: the World Wide Web, email, streaming audio and video, chat rooms, and music (file) sharing. The Web, for example, presents an intuitively simple interface. Users view pages full of textual and graphical objects, click on objects that they want to learn more about, and a corresponding new page appears. Most people are also aware that just under the covers, each selectable object on a page is bound to an identifier for the next page to be viewed. This identifier, called a Uniform Resource Locator (URL), provides a way of identifying all the possible pages that can be viewed from your web browser. For example, http://www.cs.princeton.edu/~llp/index.html is the URL for a page providing information about one of this book's authors: the string http indicates that the HyperText Transfer Protocol (HTTP) should be used to download the page, www.cs.princeton.edu is the name of the machine that serves the page, and /~llp/index.html uniquely identifies Larry's home page at this site.

What most Web users are not aware of, however, is that by clicking on just one such URL, as many as 17 messages may be exchanged over the Internet, and this assumes the page itself is small enough to fit in a single message. This number includes up to six messages to translate the server name (www.cs.princeton.edu) into its Internet address (128.112.136.35), three messages to set up a Transmission Control Protocol (TCP) connection between your browser and this server, four messages for your browser to send the HTTP GET request and the server to respond with the requested page (and for each side to acknowledge receipt of that message), and four messages to tear down the TCP connection. Of course, this does not include the millions of messages exchanged by Internet nodes throughout the day, just to let each other know that they exist and are ready to serve web pages, translate names to addresses, and forward messages toward their ultimate destination.
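To make this example concrete, here is a small illustrative Python sketch (my own, not from the original text; it uses only the standard library's urllib.parse). It splits the example URL into the three parts just described and tallies the worst-case message count quoted above.

    # Hypothetical illustration of the URL anatomy and message tally above.
    from urllib.parse import urlparse

    parts = urlparse("http://www.cs.princeton.edu/~llp/index.html")
    print(parts.scheme)   # 'http': the protocol used to download the page
    print(parts.netloc)   # 'www.cs.princeton.edu': the machine serving the page
    print(parts.path)     # '/~llp/index.html': Larry's page at that site

    # Worst-case message exchange for one click, per the paragraph above:
    messages = {
        "DNS name resolution (up to)": 6,
        "TCP connection setup": 3,
        "HTTP request/response plus ACKs": 4,
        "TCP connection teardown": 4,
    }
    print(sum(messages.values()))  # 17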
Another widespread application of the Internet is the delivery of streaming audio and video. While an entire video file could first be fetched from a remote machine and then played on the local machine, similar to the process of downloading and displaying a web page, this would entail waiting for the last second of the video file to be delivered before starting to look at it. Streaming video implies that the sender and the receiver are, respectively, the source and the sink for the video stream. That is, the source generates a video stream (perhaps using a video capture card), sends it across the Internet in messages, and the sink displays the stream as it arrives.

There are a variety of different classes of video applications. One class is video-on-demand, which reads a pre-existing movie from disk and transmits it over the network. Another is videoconferencing, which is in some ways the more challenging (and, for networking people, interesting) case because it has very tight timing constraints. Just as when using the telephone, the interactions among the participants must be timely. When a person at one end gestures, that action must be displayed at the other end as quickly as possible. Too much delay makes the system unusable. Contrast this with video-on-demand where, if it takes several seconds from the time the user starts the video until the first image is displayed, the service is still deemed satisfactory. Also, interactive video usually implies that video is flowing in both directions, while a video-on-demand application is most likely sending video in only one direction.

One pioneering example of a videoconferencing tool, developed in the early and mid-1990s, is vic. (Figure omitted: the control panel for a vic session.) vic is actually one of a suite of conferencing tools designed at Lawrence Berkeley Laboratory and UC Berkeley. The others include a whiteboard application (wb) that allows users to send sketches and slides to each other, a visual audio tool called vat, and a session directory (sdr) that is used to create and advertise videoconferences. All these tools run on Unix—hence their lowercase names—and are freely available on the Internet. Many similar tools are available for other operating systems. It is interesting to note that while video over the Internet was still considered to be in its relative infancy at the time of this writing (2006), the tools to support video over IP have existed for well over a decade.

Although they are just two examples, downloading pages from the Web and participating in a videoconference demonstrate the diversity of applications that can be built on top of the Internet, and they hint at the complexity of the Internet's design. Starting from the beginning, and addressing one problem at a time, the rest of this book explains how to build a network that supports such a wide range of applications. Chapter 9 concludes the book by revisiting these two specific applications, as well as several others that have become popular on today's Internet.

1.2 REQUIREMENTS

We have just established an ambitious goal for ourselves: to understand how to build a computer network from the ground up. Our approach to accomplishing this goal will be to start from first principles and then ask the kinds of questions we would naturally ask if building an actual network. At each step, we will use today's protocols to illustrate various design choices available to us, but we will not accept these existing artifacts as gospel. Instead, we will be asking (and answering) the question of why networks are designed the way they are. While it is tempting to settle for just understanding the way it's done today, it is important to recognize the underlying concepts, because networks are constantly changing as the technology evolves and new applications are invented. It is our experience that once you understand the fundamental ideas, any new protocol that you are confronted with will be relatively easy to digest.

The first step is to identify the set of constraints and requirements that influence network design. Before getting started, however, it is important to understand that the expectations you have of a network depend on your perspective. An application programmer would list the services that his application needs: for example, a guarantee that each message the application sends will be delivered without error within a certain amount of time. A network designer would list the properties of a cost-effective design: for example, that network resources are efficiently utilized and fairly allocated to different users. A network provider would list the characteristics of a system that is easy to administer and manage: for example, one in which faults can be easily isolated and usage is easy to account for.
This section attempts to distill these different perspectives into a high-level introduction to the major considerations that drive network design, and in doing so, identifies the challenges addressed throughout the rest of this book.

1.2.1 Connectivity

Starting with the obvious, a network must provide connectivity among a set of computers. Sometimes it is enough to build a limited network that connects only a few select machines. In fact, for reasons of privacy and security, many private (corporate) networks have the explicit goal of limiting the set of machines that are connected. In contrast, other networks (of which the Internet is the prime example) are designed to grow in a way that allows them the potential to connect all the computers in the world. A system that is designed to support growth to an arbitrarily large size is said to scale. Using the Internet as a model, this book addresses the challenge of scalability.

Links, Nodes, and Clouds

Network connectivity occurs at many different levels. At the lowest level, a network can consist of two or more computers directly connected by some physical medium, such as a coaxial cable or an optical fiber. We call such a physical medium a link, and we often refer to the computers it connects as nodes. (Sometimes a node is a more specialized piece of hardware rather than a computer, but we overlook that distinction for the purposes of this discussion.) Physical links are sometimes limited to a pair of nodes (such a link is said to be point-to-point), while in other cases more than two nodes may share a single physical link (such a link is said to be multiple-access). Whether a given link supports point-to-point or multiple-access connectivity depends on how the node is attached to the link. It is also the case that multiple-access links are often limited in size, in terms of both the geographical distance they can cover and the number of nodes they can connect.

If computer networks were limited to situations in which all nodes are directly connected to each other over a common physical medium, then networks would either be very limited in the number of computers they could connect, or the number of wires coming out of the back of each node would quickly become both unmanageable and very expensive. Fortunately, connectivity between two nodes does not necessarily imply a direct physical connection between them—indirect connectivity may be achieved among a set of cooperating nodes. Consider the following two examples of how a collection of computers can be indirectly connected.

In the first example, each node in a set of nodes is attached to one or more point-to-point links. Those nodes that are attached to at least two links run software that forwards data received on one link out on another. If organized in a systematic way, these forwarding nodes form a switched network. There are numerous types of switched networks, of which the two most common are circuit-switched and packet-switched. The former is most notably employed by the telephone system, while the latter is used for the overwhelming majority of computer networks and will be the focus of this book. The important feature of packet-switched networks is that the nodes in such a network send discrete blocks of data to each other. Think of these blocks of data as corresponding to some piece of application data such as a file, a piece of email, or an image.
We call each block of data either a packet or a message, and for now we use these terms interchangeably; we discuss the reason they are not always the same in Section 1.2.2. Packet-switched networks typically use a strategy called store-and-forward. As the name suggests, each node in a store-and-forward network first receives a complete packet over some link, stores the packet in its internal memory, and then forwards the complete packet to the next node. In contrast, a circuit-switched network first establishes a dedicated circuit across a sequence of links and then allows the source node to send a stream of bits across this circuit to a destination node. The major reason for using packet switching rather than circuit switching in a computer network is efficiency, discussed in the next subsection.

In a diagram of such a network, a cloud distinguishes between the nodes on the inside that implement the network (they are commonly called switches, and their primary function is to store and forward packets) and the nodes on the outside of the cloud that use the network (they are commonly called hosts, and they support users and run application programs). Also note that the cloud is one of the most important icons of computer networking. In general, we use a cloud to denote any type of network, whether it is a single point-to-point link, a multiple-access link, or a switched network. Thus, whenever you see a cloud used in a figure, you can think of it as a placeholder for any of the networking technologies covered in this book.

A second way in which a set of computers can be indirectly connected is to interconnect a set of independent networks (clouds) to form an internetwork, or internet for short. We adopt the Internet's convention of referring to a generic internetwork of networks as a lowercase-i internet, and the currently operational TCP/IP Internet as the capital-I Internet. A node that is connected to two or more networks is commonly called a router or gateway, and it plays much the same role as a switch—it forwards messages from one network to another. Note that an internet can itself be viewed as another kind of network, which means that an internet can be built from an interconnection of internets. Thus, we can recursively build arbitrarily large networks by interconnecting clouds to form larger clouds.

Just because a set of hosts are directly or indirectly connected to each other does not mean that we have succeeded in providing host-to-host connectivity. The final requirement is that each node must be able to state which of the other nodes on the network it wants to communicate with. This is done by assigning an address to each node. An address is a byte string that identifies a node; that is, the network can use a node's address to distinguish it from the other nodes connected to the network. When a source node wants the network to deliver a message to a certain destination node, it specifies the address of the destination node. If the sending and receiving nodes are not directly connected, then the switches and routers of the network use this address to decide how to forward the message toward the destination. The process of determining systematically how to forward messages toward the destination node based on its address is called routing.
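As a minimal sketch of store-and-forward delivery with address-based forwarding (the topology, addresses, and forwarding-table contents here are invented for illustration, not taken from the text), each switch buffers a complete packet, looks up the packet's destination address in its own table, and passes the packet one hop closer to the destination:

    # Two hypothetical switches connect hosts A, B, and C. Each switch maps
    # a destination address to the next node on the path.
    forwarding = {
        "switch1": {"A": "hostA", "B": "switch2", "C": "switch2"},
        "switch2": {"A": "switch1", "B": "hostB", "C": "hostC"},
    }

    def deliver(packet, node):
        # Forward the packet hop by hop until it reaches a host.
        while node.startswith("switch"):
            print(node, "stores packet for", packet["dest"])  # store...
            node = forwarding[node][packet["dest"]]           # ...and forward
        print(node, "receives:", packet["data"])

    deliver({"dest": "C", "data": "hello"}, "switch1")

Real switches and routers build such tables automatically via routing protocols; here the tables are written by hand simply to show the lookup.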
This brief introduction to addressing and routing has presumed that the source node wants to send a message to a single destination node (unicast). While this is the most common scenario, it is also possible that the source node might want to broadcast a message to all the nodes on the network. Or a source node might want to send a message to some subset of the other nodes, but not all of them, a situation called multicast. Thus, in addition to node-specific addresses, another requirement of a network is that it supports multicast and broadcast addresses.

The main idea to take away from this discussion is that we can define a network recursively as consisting of two or more nodes connected by a physical link, or as two or more networks connected by a node. In other words, a network can be constructed from a nesting of networks, where at the bottom level the network is implemented by some physical medium. One of the key challenges in providing network connectivity is to define an address for each node that is reachable on the network (including support for broadcast and multicast connectivity), and to be able to use this address to route messages toward the appropriate destination node(s).

1.2.2 Cost-Effective Resource Sharing

As stated above, this book focuses on packet-switched networks. This section explains the key requirement of computer networks—efficiency—that leads us to packet switching as the strategy of choice.

Given a collection of nodes indirectly connected by a nesting of networks, it is possible for any pair of hosts to send messages to each other across a sequence of links and nodes. Of course, we want to do more than support just one pair of communicating hosts—we want to provide all pairs of hosts with the ability to exchange messages. The question, then, is how do all the hosts that want to communicate share the network, especially if they want to use it at the same time? And, as if that problem isn't hard enough, how do several hosts share the same link when they all want to use it at the same time?

To understand how hosts share a network, we need to introduce a fundamental concept, multiplexing, which means that a system resource is shared among multiple users. At an intuitive level, multiplexing can be explained by analogy to a timesharing computer system, where a single physical CPU is shared (multiplexed) among multiple jobs, each of which believes it has its own private processor. Similarly, data being sent by multiple users can be multiplexed over the physical links that make up a network.

To see how this might work, consider a simple network in which the three hosts on the left side (senders S1-S3) are sending data to the three hosts on the right (receivers R1-R3) by sharing a switched network that contains only one physical link. (For simplicity, assume that host S1 is sending data to host R1, and so on.) In this situation, three flows of data—corresponding to the three pairs of hosts—are multiplexed onto a single physical link by switch 1 and then demultiplexed back into separate flows by switch 2. Note that we are being intentionally vague about exactly what a flow of data corresponds to. For the purposes of this discussion, assume that each host on the left has a large supply of data that it wants to send to its counterpart on the right.

There are several different methods for multiplexing multiple flows onto one physical link. One common method is synchronous time-division multiplexing (STDM). The idea of STDM is to divide time into equal-sized quanta and, in a round-robin fashion, give each flow a chance to send its data over the physical link.
In other words, during time quantum 1, data from S1 to R1 is transmitted; during time quantum 2, data from S2 to R2 is transmitted; in quantum 3, S3 sends data to R3. At this point, the first flow (S1 to R1) gets to go again, and the process repeats. Another method is frequency-division multiplexing (FDM). The idea of FDM is to transmit each flow over the physical link at a different frequency, much the same way that the signals for different TV stations are transmitted at a different frequency on a physical cable TV link.

Although simple to understand, both STDM and FDM are limited in two ways. First, if one of the flows (host pairs) does not have any data to send, its share of the physical link—that is, its time quantum or its frequency—remains idle, even if one of the other flows has data to transmit. For example, S3 had to wait its turn behind S1 and S2 in the previous paragraph, even if S1 and S2 had nothing to send. For computer communication, the amount of time that a link is idle can be very large—for example, consider the amount of time you spend reading a web page (leaving the link idle) compared to the time you spend fetching the page. Second, both STDM and FDM are limited to situations in which the maximum number of flows is fixed and known ahead of time. It is not practical to resize the quantum or to add additional quanta in the case of STDM, or to add new frequencies in the case of FDM.

The form of multiplexing that we make most use of in this book is called statistical multiplexing. Although the name is not all that helpful for understanding the concept, statistical multiplexing is really quite simple, with two key ideas. First, it is like STDM in that the physical link is shared over time—first data from one flow is transmitted over the physical link, then data from another flow is transmitted, and so on. Unlike STDM, however, data is transmitted from each flow on demand rather than during a predetermined time slot. Thus, if only one flow has data to send, it gets to transmit that data without waiting for its quantum to come around and thus without having to watch the quanta assigned to the other flows go by unused. It is this avoidance of idle time that gives packet switching its efficiency.

As defined so far, however, statistical multiplexing has no mechanism to ensure that all the flows eventually get their turn to transmit over the physical link. That is, once a flow begins sending data, we need some way to limit the transmission so that the other flows can have a turn. To account for this need, statistical multiplexing defines an upper bound on the size of the block of data that each flow is permitted to transmit at a given time. This limited-size block of data is typically referred to as a packet, to distinguish it from the arbitrarily large message that an application program might want to transmit. Because a packet-switched network limits the maximum size of packets, a host may not be able to send a complete message in one packet. The source may need to fragment the message into several packets, with the receiver reassembling the packets back into the original message. In other words, each flow sends a sequence of packets over the physical link, with a decision made on a packet-by-packet basis as to which flow's packet to send next. Notice that if only one flow has data to send, then it can send a sequence of packets back-to-back. However, should more than one of the flows have data to send, then their packets are interleaved on the link.
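The following toy Python sketch (the flow names, message contents, and 4-byte packet bound are arbitrary assumptions) shows both key ideas at once: each message is fragmented into bounded packets, and packets are sent on demand, so their transmissions end up interleaved on the link.

    MAX_PACKET = 4  # hypothetical upper bound on packet size, in bytes

    def fragment(message):
        # Split an arbitrarily large message into packet-sized pieces.
        return [message[i:i + MAX_PACKET]
                for i in range(0, len(message), MAX_PACKET)]

    flows = {"S1": fragment(b"HELLO WORLD"), "S2": fragment(b"HI")}
    while any(flows.values()):
        for name, packets in flows.items():
            if packets:                 # on demand: idle flows are skipped
                print(name, "sends", packets.pop(0))
    # The receiver collects each flow's packets in order and reassembles
    # the original messages.

Unlike the STDM scheme above, a flow with nothing to send consumes no link time here.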
Picture a switch multiplexing packets from multiple sources onto a single shared link. The decision as to which packet to send next on a shared link can be made in a number of different ways. For example, in a network consisting of switches interconnected by links, the decision would be made by the switch that transmits packets onto the shared link. (As we will see later, not all packet-switched networks actually involve switches, and they may use other mechanisms to determine whose packet goes onto the link next.) Each switch in a packet-switched network makes this decision independently, on a packet-by-packet basis. One of the issues that faces a network designer is how to make this decision in a fair manner. For example, a switch could be designed to service packets on a first-in-first-out (FIFO) basis. Another approach would be to transmit the packets from each of the different flows that are currently sending data through the switch in a round-robin manner. This might be done to ensure that certain flows receive a particular share of the link's bandwidth, or that they never have their packets delayed in the switch for more than a certain length of time. A network that attempts to allocate bandwidth to particular flows is sometimes said to support quality of service (QoS), a topic that we return to in Chapter 6.

Also, notice that since the switch has to multiplex three incoming packet streams onto one outgoing link, it is possible that the switch will receive packets faster than the shared link can accommodate. In this case, the switch is forced to buffer these packets in its memory. Should a switch receive packets faster than it can send them for an extended period of time, then the switch will eventually run out of buffer space, and some packets will have to be dropped. When a switch is operating in this state, it is said to be congested.

The bottom line is that statistical multiplexing defines a cost-effective way for multiple users (e.g., host-to-host flows of data) to share network resources (links and nodes) in a fine-grained manner. It defines the packet as the granularity with which the links of the network are allocated to different flows, with each switch able to schedule the use of the physical links it is connected to on a per-packet basis. Fairly allocating link capacity to different flows and dealing with congestion when it occurs are the key challenges of statistical multiplexing.
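A toy model of that congested state (the buffer size, arrival rate, and drain rate are invented numbers, not from the text): packets arrive at a FIFO output queue twice as fast as the link drains them, so once the buffer fills, later arrivals are dropped.

    from collections import deque

    BUFFER_SLOTS = 4
    buffer, dropped = deque(), 0

    for tick in range(10):
        for _ in range(2):                # two packets arrive per tick...
            if len(buffer) < BUFFER_SLOTS:
                buffer.append("pkt@%d" % tick)
            else:
                dropped += 1              # buffer full: the switch drops it
        if buffer:
            buffer.popleft()              # ...but only one departs per tick

    print(dropped, "packets dropped; the switch is congested")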
1.2.3 Support for Common Services

While the previous section outlined the challenges involved in providing cost-effective connectivity among a group of hosts, it is overly simplistic to view a computer network as simply delivering packets among a collection of computers. It is more accurate to think of a network as providing the means for a set of application processes that are distributed over those computers to communicate. In other words, the next requirement of a computer network is that the application programs running on the hosts connected to the network must be able to communicate in a meaningful way. When two application programs need to communicate with each other, there are a lot of complicated things that need to happen beyond simply sending a message from one host to another. One option would be for application designers to build all that complicated functionality into each application program. However, since many applications need common services, it is much more logical to implement those common services once and then to let the application designer build the application using those services.

The challenge for a network designer is to identify the right set of common services. The goal is to hide the complexity of the network from the application without overly constraining the application designer. Intuitively, we view the network as providing logical channels over which application-level processes can communicate with each other; each channel provides the set of services required by that application. In other words, just as we use a cloud to abstractly represent connectivity among a set of computers, we now think of a channel as connecting one process to another: picture a pair of application-level processes communicating over a logical channel that is, in turn, implemented on top of a cloud that connects a set of hosts. We can think of the channel as being like a pipe connecting two applications, so that a sending application can put data in one end and expect that data to be delivered by the network to the application at the other end of the pipe.

The challenge is to recognize what functionality the channels should provide to application programs. For example, does the application require a guarantee that messages sent over the channel are delivered, or is it acceptable if some messages fail to arrive? Is it necessary that messages arrive at the recipient process in the same order in which they are sent, or does the recipient not care about the order in which messages arrive? Does the network need to ensure that no third parties are able to eavesdrop on the channel, or is privacy not a concern? In general, a network provides a variety of different types of channels, with each application selecting the type that best meets its needs. The rest of this section illustrates the thinking involved in defining useful channels.

Identifying Common Communication Patterns

Designing abstract channels involves first understanding the communication needs of a representative collection of applications, then extracting their common communication requirements, and finally incorporating the functionality that meets these requirements in the network. One of the earliest applications supported on any network is a file access program like FTP (the File Transfer Protocol) or NFS (the Network File System).
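As a closing sketch of the channel-as-pipe idea above (a hypothetical demo; the loopback address, OS-assigned port, and message are arbitrary choices), two local endpoints, the second simulated with a thread, communicate over a TCP socket: bytes put in one end of the pipe come out the other.

    import socket
    import threading

    def receiver(server):
        conn, _ = server.accept()         # one end of the pipe
        print("received:", conn.recv(1024).decode())
        conn.close()

    server = socket.socket()
    server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
    server.listen(1)
    threading.Thread(target=receiver, args=(server,)).start()

    sender = socket.create_connection(server.getsockname())
    sender.sendall(b"data in one end, out the other")
    sender.close()

A TCP socket is one concrete channel type: it delivers bytes reliably and in order, which is exactly the kind of guarantee the questions above are probing.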