Thursday, September 3, 2020

Wavelet Packet Feature Extraction And Support Vector Machine Psychology Essay

The aim of this work is the automatic classification of electroencephalogram (EEG) signals using statistical feature extraction and a support vector machine. From a real database, two sets of EEG signals are used: EEG recorded from a healthy person and from an epileptic person during epileptic seizures. Three significant statistical features are computed at the different sub-bands of the discrete wavelet and wavelet packet decomposition of the EEG recordings. In this study, in order to select the best wavelet for our application, five wavelet basis functions are considered for processing the EEG signals. After reducing the dimension of the obtained data by linear discriminant analysis and principal component analysis, the feature vectors are used to model and train an efficient support vector machine classifier. To demonstrate the efficiency of this approach, the statistical classification performance is evaluated, and a rate of 100% for the best classification accuracy is obtained and compared with the rates reported in other studies for the same data set. Keywords: EEG; Discrete Wavelet Transform; Wavelet Packet Transform; Support Vector Machine; statistical analysis; classification.

1. Introduction
In neurology, the electroencephalogram (EEG) is a non-invasive test of brain function that is mainly used for the diagnosis and classification of epilepsy. Epileptic episodes are the result of excessive electrical discharges in a group of brain cells. Epilepsy is a chronic neurological disorder of the brain that affects more than 50 million people worldwide, and in developing countries three quarters of people with epilepsy may not get the treatment they need [1]. In clinical decision making, the EEG is related to the initiation of treatment to improve the quality of life of epileptic patients. However, EEG recordings occupy a huge volume, and the scoring of long-term EEG recordings by visual inspection, in order to classify epilepsy, is usually a time-consuming task. Therefore, many researchers have addressed the problem of automatic detection and classification of epileptic EEG signals [2, 3]. Numerous studies have shown that the EEG signal is a non-stationary process, and non-linear features are extracted from brain activity recordings in order to capture specific signal characteristics [2, 4, 5, 6]. These features are then used as input to classifiers [11]. Subasi [7] used the discrete wavelet transform (DWT) coefficients of normal and epileptic EEG segments in a specific neural network called a mixture of experts. For the same EEG data set, Polat and Günes [8] used feature reduction methods including DWT, autoregressive modelling and the discrete Fourier transform. In Subasi and Gursoy [9], the dimensionality of the DWT features was reduced using principal component analysis (PCA), independent component analysis (ICA) and linear discriminant analysis (LDA); the resulting features were used to classify normal and epileptic EEG signals with a support vector machine. Jahankhani, Kodogiannis and Revett [10] obtained feature vectors from EEG signals by DWT and performed the classification with a multilayer perceptron (MLP) and a radial basis function network.
The wavelet packet transform (WPT) appears to be one of the most promising techniques, as shown by the large number of works in the literature [11], particularly for ECG signals and, to a lesser extent, for EEG signals. In [12], Wang, Miao and Xie used a wavelet packet entropy method to extract features and a K-nearest neighbour (K-NN) classifier. In this work, both the DWT and the WPT split the non-stationary EEG signals into frequency sub-bands. A set of statistical features, such as standard deviation, energy and entropy, is then computed at every decomposition level from the real-database EEG recordings to represent the time-frequency distribution of the wavelet coefficients. LDA and PCA are applied to these parameters, allowing a data reduction. The resulting features are used as input to an efficient SVM classifier with two discrete outputs: normal person and epileptic subject. A measure of the performance of these methods is presented. The remainder of this paper is organized as follows: Section 2 describes the data set of EEG signals used in our work. In Section 3, preliminaries are presented for immediate reference. This is followed by the set-up of our experiments and the results in Section 4. Finally, some concluding remarks are given in Section 5.

2. Data Selection
We have used the EEG data taken from the artifact-free EEG time series database available at the Department of Epileptology, University of Bonn [23]. The complete dataset consists of five sets (denoted A-B-C-D-E). Each set contains 100 single-channel EEG signals of 23.6 s. The normal EEG data were obtained from five healthy volunteers who were in the relaxed awake state with their eyes open (set A). These signals were obtained from extra-cranial surface EEG recordings according to a standardized electrode placement. Set E contains seizure activity, selected from all recording sites exhibiting ictal activity. All EEG signals were recorded with the same 128-channel amplifier system and digitized at a 173.61 Hz sampling rate; 12-bit analog-to-digital conversion and band-pass (0.53-40 Hz) filter settings were used. For a more detailed description, the reader can refer to [13]. In our study, we used set A and set E from the complete dataset.

[Figure 1: Flow chart of the proposed system: raw EEG signal; feature extraction (energy, entropy and standard deviation from the DWT and WPT decomposition coefficients); dimensionality reduction by LDA and PCA; classification and performance measure; output: healthy / epileptic.]

3. Methods
The proposed method consists of three main parts: (i) statistical feature extraction from the DWT and WPT decomposition coefficients, (ii) dimensionality reduction using PCA and LDA, and (iii) EEG classification using an SVM. The flow diagram of the proposed method is given in Figure 1. Details of the pre-processing and classification steps are examined in the following subsections.

3.1 Analysis using DWT and WPT
Since the EEG is a highly non-stationary signal, the use of time-frequency domain methods has recently been recommended [14]. The wavelet transform can be used to decompose a signal into sub-bands with low frequency content (approximation coefficients) and sub-bands with high frequency content (detail coefficients) [15, 16, 17]. Under the discrete wavelet transform (DWT), only the approximation coefficients are decomposed iteratively by two filters and then down-sampled by 2. The first filter h[.]
is a high-pass filter which is the mirror of the second, low-pass filter l[.]. The DWT gives a left-recursive binary tree structure. We processed 16 DWT coefficients. The wavelet packet transform (WPT) is an extension of the DWT that provides a more informative signal analysis. With the WPT, the lower as well as the higher frequency bands are decomposed, giving a balanced tree structure. The wavelet packet transform generates a full decomposition tree, as shown in Figure 2. In this work we performed a five-level wavelet packet decomposition. The two wavelet packet orthogonal bases at a parent node (i, p) are obtained from the recursive relations in Eq. (1) and (2), where l[n] and h[n] are the low-pass (scaling) and high-pass (wavelet) filters, respectively; i is the index of the subspace depth and p is the number of the subspace [15]. The wavelet packet coefficients corresponding to the signal x(t) can be obtained from Eq. (3).

[Figure 2: Third-level wavelet packet decomposition of an EEG signal: the signal at node (0,0) is split by the low-pass filter l and the high-pass filter h into nodes (1,0) and (1,1), these into (2,0)-(2,3), and these into (3,0)-(3,7).]

Table 1 gives the frequency bands for each level of the WPT decomposition. Figures 3 and 4 show the fifth-level wavelet packet decomposition of EEG segments, following the scheme of Figure 2. We processed 32 WPT coefficients. Three statistical parameters are then computed in this study: the energy (En), the Shannon entropy (Ent) and the standard deviation (Std), given by Eqs. (4), (5) and (6).

3.2 Principal component analysis
To make a classifier system more effective, we use principal component analysis (PCA) for dimensionality reduction. The purpose of its application is to derive a small number of uncorrelated principal components from a larger set of zero-mean variables, retaining the maximum possible amount of information from the original data. Formally, the most common derivation of PCA is in terms of a standardized linear projection which maximizes the variance in the projected space [18, 19]. For a given p-dimensional data set X, the m principal axes W1,...,Wm, where 1 ≤ m ≤ p, are the orthogonal axes onto which the retained variance in the projected space is maximal. In general, W1,...,Wm are given by the m leading eigenvectors of the sample covariance matrix, in which the mean is the sample mean and N is the number of samples.

Table 1: Frequency bands of each wavelet decomposition level (Hz).
Level 1: 0-86.8; 86.8-173.6
Level 2: 0-43.5; 43.5-86.8; 86.8-130.2; 130.2-173.6
Level 3: 0-21.75; 21.75-43.5; 43.5-54.375; 54.375-86.3; 86.3-108.05; 108.05-130.2; 130.2-151.95; 151.95-173.6
Level 4: 0-10.875; 10.875-21.75; 21.75-32.625; 32.625-43.5; 43.5-54.375; 54.375-65.25; 65.25-76.125; 76.125-87; 87-97.875; 97.875-108.75; 108.75-119.625; 119.625-130.5; 130.5-141.375; 141.375-152.25; 152.25-163.125; 163.125-173.6
Level 5: 0-5.44; 5.44-10.875; 10.875-16.31; 16.31-21.75; 21.75-27.19; 27.19-32.625; 32.625-38.06; 38.06-43.5; 43.5-48.94; 48.94-54.375; 54.375-59.81; 59.81-65.25; 65.25-70.69; 70.69-76.125; 76.125-81.56; 81.56-87; 87-92.44; 92.44-97.87; 97.87-103.3; 103.3-108.75; 108.75-114.19; 114.19-119.625; 119.625-125.06; 125.06-130.5; 130.5-135.94; 135.94-141.38; 141.38-146.81; 146.81-152.25; 152.25-157.69; 157.69-163.125; 163.125-168.56; 168.56-173.6
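The feature-extraction and classification chain described above can be illustrated with a short script. This is only a sketch, not the authors' code: it assumes the PyWavelets and scikit-learn libraries are available, uses randomly generated arrays in place of the Bonn set A / set E recordings, uses one common definition of sub-band energy and Shannon entropy (the exact forms of Eqs. (4)-(6) are not reproduced in this text), and uses PCA only, where the paper combines PCA and LDA.

```python
# Illustrative sketch (not the authors' original code): per-sub-band energy,
# Shannon entropy and standard deviation from a 5-level wavelet packet
# decomposition, followed by PCA and an SVM classifier as in Figure 1.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def subband_features(signal, wavelet="db4", level=5):
    """Energy, Shannon entropy and standard deviation per terminal WPT node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="freq"):    # 2**level sub-bands
        c = node.data
        energy = np.sum(c ** 2)
        p = c ** 2 / (energy + 1e-12)                  # normalised coefficient energies
        entropy = -np.sum(p * np.log2(p + 1e-12))      # Shannon entropy of the sub-band
        feats.extend([energy, entropy, np.std(c)])
    return np.array(feats)

# Placeholder data standing in for the 4097-sample Bonn EEG segments
# (set A = healthy, set E = seizure); only the shapes are meaningful here.
rng = np.random.default_rng(0)
healthy = rng.standard_normal((100, 4097))
seizure = 5.0 * rng.standard_normal((100, 4097))
X = np.vstack([[subband_features(s) for s in healthy],
               [subband_features(s) for s in seizure]])
y = np.array([0] * 100 + [1] * 100)

# Dimensionality reduction followed by the SVM classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

On the real data the same pipeline would of course be trained on one part of the segments and evaluated on a held-out part rather than scored on the training set.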

Saturday, August 22, 2020

A Speculative View of American History to 1876 :: Essays Papers

A Speculative View of American History to 1876. Those who do not study history are bound to repeat it. Human nature is one of curiosity; we are not content with the shallow façade of our existence. Rather, we want understanding. We need to know not only how we have come to be who we are as a people, but more importantly why we are, and where, as a society, we are destined to end. The answer to our persistent question of existence lies in the past. We must look beyond the merely factual record of events which makes up our history, take a more speculative approach, and examine the philosophy of history: in our case, American history. The world has seen many different historical philosophies throughout time. Two contrasting extremes of historical philosophy were those of ancient Greece and Rome, who subscribed to the Stoic cyclical view of history, and Immanuel Kant's idea of Progress. Karl Marx, in the eighteenth century, set out his communist ideas in a volume he co-wrote, The Communist Manifesto. The historical philosophy, however, which best explains the first half of American history, from its birth in Europe to the Civil War, is that of Augustine. Augustine's theory of history can be identified in his major work, The City of God, where he explains his idea of the City of Man versus the City of God: "Accordingly, two cities have been formed by two loves; the earthly by the love of self, even to the contempt of God; the heavenly by the love of God, even to the contempt of self. The former, in a word, glories in itself, the latter in the Lord."1 As Ronald Nash explains: Augustine explains that the two cities will coexist throughout human history, even within the boundaries of professing Christendom. Only at the final judgment, which ends human history, will the two cities finally be separated, in order that they may share their appointed destinies of heaven and hell. What accounts for people's place in either city is the object of their love. People belong to the City of God by virtue of their love for God; the rest of humanity belongs to the City of Man because of their "love of self, even to the contempt of God."2 This remarkable work3 originally began as a response to the accusation that Rome's Christian conversion ultimately contributed to its sack by Alaric and his Goths.

Influence of Computer Information Technology

Question: Examine about the Influence of Computer for Information Technology. Answer: Presentation Self-viability assumes a critical job in one's life. It is basic to pass judgment on ones abilities. Be that as it may, viability and self-adequacy contrast from one another and must not be confounded. The term viability is utilized in the field of medication and pharmacology. Adequacy alludes to the most extreme reaction accomplished from a dosed specialist. Then again, self-adequacy is simply the conviction to accomplish ones objectives. Adequacy is clinical term while self-viability is a term in the field of brain research or human science. Adequacy is a restorative impact of a clinical gadget, tranquilize, or surgery. Be that as it may, both adequacy and self-viability may have positive just as negative effect on a people wellbeing. The ascent in self-adequacy levels would help the understudies over the long haul. For the most part, the precise estimation of self-viability gives a premise whereupon one can assess alignment precisely. This adjustment is vital for IT understudies. The ascent in self-adequacy levels would upgrade execution, and everybody would profit by this undertaking. The report plans to give an inside and out investigation of self-viability. Self-viability Self-viability is characterized as the degree to which an individual can adhere to his/her choices. It is a conviction, which an individual has his/her own capacity to finish doled out errands and arrive at objectives. There has been broad research on self-adequacy by therapists. Self-adequacy influences the brain science, inspiration and standard of conduct of an individual generally. There are a few speculations determined by therapists on self-viability. Clearly, individuals feel that they can accomplish all objectives that they set. In any case, the greater part of them neglect to place their arrangements enthusiastically. With time, individuals understand that it is difficult to achieve those undertakings basically. Self-viability assumes a critical job in a people life for it significantly influences a people point of view and inspiration. Self-adequacy helps an individual in confronting the difficulties and consider them to be errands that they should ace. People with self-viability attempt to determine the issues they face instead of dodging them. They center around a mind-blowing exercises and have a solid feeling of responsibility. Then again, individuals with low self-viability dread about the difficulties throughout everyday life and consider them to be potential dangers. They stay uncertain to their objectives and frequently neglect to achieve them. They lose confidence and certainty rapidly and consistently contemplate the troublesome circumstances of life. They experience the ill effects of low confidence and experience the ill effects of pressure and wretchedness. They learn and accomplish significantly less contrasted with individuals with high self-viability. In this way, an elevate d feeling of self-adequacy permits an individual to recoup rapidly from frustrations and mishaps. They are more effective than individuals with low self-adequacy. Four factors essentially influence the self-adequacy. They are enactive fulfillment or experience, vicarious experience or displaying, social influence, and physiology. Condition and qualities additionally influence self-viability. The self-adequacy of an individual is gravely hampered by pressure and exhibits his/her powerlessness. 
There are various hypothetical methodologies of self-adequacy like the social psychological hypothesis, self-idea hypothesis, social learning hypothesis, and the attribution hypothesis. The social intellectual hypothesis stresses the job that social experience and observational learning plays for building up the character. Proposed by Albert Bandura, social psychological hypothesis shows how watching activities of others impacts a people subjective procedures, social practices, and individual activities or responses (John 2013). Outside encounters have a huge impact in deciding self-adequacy of an individual. Social learning hypothesis explains obtaining aptitudes grew fundamentally from a social gathering. Social learning creates individual abilities and individual feelings and in this way encourages a person to acknowledge others. It adds to see the self in a superior manner as individuals learn by copying others, or through perception. Self-viability outlines a people comprehension and commitment in a gathering. Self-idea hypothesis endeavors to account for self-idea as something that is sorted out, learned, and ever evolving. The presence of an individual is deciphered through outer sources and spotlights on how the impressions stay alive for the duration of the life. The focal point of attribution hypothesis is on the attribution of occasions. Inside locus upgrades or lessens self-adequacy with progress or disappointment separately. Locus, dependability, and controllability structure the three expansive components of attribution hypothesis. As indicated by the attribution hypothesis, the nonappearance to adapt up prompts a backslide and diminished self-adequacy. Failure to settle on a choice is an aftereffect of low confidence and must be tended to and settled. Writing survey Self-adequacy in IT This investigation analyzed the impact of self-viability in the Information innovation part and in attention to objectives. The examination was done dependent on objective setting hypothesis, inspiration arrangement model, and the social subjective hypothesis. The examination researched such connections utilizing the ERP, that is, the Enterprise Resource Planning framework. The investigation gave a more profound understanding to the ERP framework scientists into the significance of self-adequacy. As both IT self-adequacy and objective mindfulness affect work fulfillment. Self-adequacy is identified with and basic in utilizing email. Individuals with high self-adequacy will undoubtedly utilize PCs more than people with less self-viability. Self-viability assumes a positive job in building up relationship between a representative and remote worker. Also, individuals with increasingly self-viability in utilizing data innovation have more work result. Likewise, the level self-viability h ave various inclinations for utilizing various advances. People with high self-adequacy are increasingly proficient in utilizing present day innovation (Al-Haderi 2013). The exposition planned for looking at the degree of self-viability of educators who utilize Interactive innovation in language instructing. The name of the task is Interactive Technologies in Language Teaching. The point of the undertaking was to deliver assets and preparing materials for educators with the goal that they can utilize intelligent whiteboard innovation in unknown dialect instructing. The undertaking includes support of seven European nations France, Netherlands, Spain, Belgium, Wales, Germany, and Turkey. 
There was very little variety in the reactions that the exploration group got from various nations. It was seen that educators had high data and correspondence advances however low self-adequacy with certain apparatuses and highlights of IWB. Self-viability accordingly assumes an essential job in using such innovative instruments. As indicated by the creator, saw convenience and saw handiness are the deciding variables of innovative use. A people elevated level of sel f-viability emphatically impacts the acknowledgment of innovation. An instructors impression of innovation and uplifted self-adequacy helps in a contextualized and more profound comprehension of innovation. It helps in better acknowledgment of innovation (Greenhow and Askari 2015). This article depends on an examination that was directed in embracing of person to person communication locales. It estimated the impact self-viability in tolerating the utilization of innovation. 255 individuals from Thailand and Bangkok were chosen for the reason. The information was dissected through auxiliary condition demonstrating strategies. The examination uncovered that PC experience and information impacted a people self-adequacy. The opposite is likewise obvious. Social elements don't impact much in the improvement of self-adequacy if there should arise an occurrence of PC innovation. Truth be told, self-adequacy straightforwardly impacts helpfulness and in a roundabout way impacts the goal of a person to utilize data innovation. Long range interpersonal communication locales like facebook, twitter, YouTube, or LinkedIn impacts a people self-viability level in playing out a PC related assignment (Sheng et al. 2012). In this article, an investigation was directed by Park and his partners to recognize the significance of self-viability and the impact that it might have on the conviction of people. Self-adequacy characterizes and makes a decision about a clients capacity to perform business related to data innovation. It was discovered that mechanical availability, self-adequacy, and ingenuity of an individual are more significant deciding elements than segment ones like sexual orientation, age, or training. It was additionally uncovered that old clients with high self-viability acknowledge innovation more than new clients with low self-adequacy. Studies additionally uncover that self-viability affects utilization of a framework as they have an apparent feeling of simplicity in successfully utilizing the framework. High self-viability positively affect the utilization of mechanical frameworks (John 2013). The writing audit of four research work shows the significance of self-adequacy in the field of data innovation. Each examination shows that variety in self-adequacy levels influences the exhibition and yield of individuals to a great extent. The strategy to gauge self-viability levels is distinctive for each situation. The directed research measures and thinks about the exhibition of various individuals in various nation from different fields. The investigations show how self-adequacy levels may impact singular execution and profitability. There is consistently a positive connection between high self-adequacy, use of PCs, and utilization of present day mechanical apparatuses. It is additionally observed that representatives with low self-viability levels incorporate and look for innovation all the more adequately. 
End As self-viability and self-assurance assume an essential job in the field of data innovation, it is of most extreme significance that personnel gives understudies appropriate training. The ascent in self-adequacy levels would improve their presentation, and everybody would profit by this undertaking. The scor

Friday, August 21, 2020

The Spanish Flu in Remission :: Journalism Influenza Health Medical Essays

The Spanish Flu in Remission. For many, it seems there is at last reason to take a deep sigh of relief. The deadly Spanish Flu, now believed to have originated on the front lines and in the military hospitals of the war, appears to be abating. In the past two days the death toll has gone from 302 down to 269, and today it reached a remarkable low of just 17. Still, the supervisors' advisory committee and our local Health Commission say that we must keep up the fight as long as there is a single case of influenza. The eradication of this disease is the responsibility of every citizen and does not rest with the physician alone. Many are preparing to proclaim an end to this devastating illness, which has already killed millions around the world and has forced many Los Angeles residents to isolate themselves. The disease was known in the war zones in which it originated as "three-day fever," and from then on the name stuck. It can strike all of a sudden, and leave those it infects dead in under a week. John C. Acker, a Sergeant in the 32nd Division American Expeditionary Force, described the course of the illness in greater detail: "It runs its course in a little more than a week. It hits suddenly and one's temperature nearly chases the mercury through the top of the M.D.'s thermometer, the face gets red, every bone in the body aches and the head splits wide open." The disease has been the source of enormous tragedy, and sadly has killed some of our nation's finest young people, who had successfully fought to push back formidable enemy armies despite enormous odds. After these soldiers returned home, another battle confronted Uncle Sam, as the dark shadow of influenza claimed the lives of countless civilians and military personnel. Fortunately, Los Angeles' response to this scourge has been swift. Quarantine has been the official strategy. This week such measures continue, as nearly all downtown churches report that they will not hold their regular Children's Sunday School classes, since large gatherings could endanger the lives of the young. While the measures are only temporary, this has been one more sign of the inconveniences that quarantine measures have created. The churches which made this decision come from practically every denomination.

Essay questions Assignment Example | Topics and Well Written Essays - 750 words

Essay questions - Assignment Example. ...Constitution were universal and ought to be shared with everyone. By the end of the nineteenth century, the Monroe Doctrine was to come into full effect in a war with the Spanish. Not only would the Americans take control of Spanish possessions in the Caribbean, such as Cuba, Puerto Rico, and other islands, but as the war expanded so would the remit of the Monroe Doctrine. The United States would govern the Philippines, far from its own shores, and attempt to remake the Spanish colonial political system in its own image. The result would be a bloody conflict fought with Filipino insurgents that would take America many years to subdue. Following the American victory over Spain and the taking of the Philippines, there was a great deal of tension between the U.S. and the local population. This came to a head in 1899 when American soldiers shot several Filipinos. Things quickly spun out of control, with both sides raising armies and fighting conventional wars. The Americans rapidly defeated the regular Filipino forces, killing two of their best officers and pacifying many of the urban areas. During this period, the President appointed distinguished Americans to examine conditions in the Philippines and report back on ways to improve the administration of the country. The first Commission's report was an answer to those who argued America had no place in Southeast Asia: "Should our power by any fatality be withdrawn, the commission believe that the government of the Philippines would speedily lapse into anarchy, which would excuse, if it did not necessitate, the intervention of other powers and the eventual division of the islands among them. Only through American occupation, therefore, is the idea of a free, self-governing, and united Philippine commonwealth at all conceivable. And the indispensable need, from the Filipino point of view, of maintaining American sovereignty over the archipelago is recognized by all intelligent Filipinos and

Saturday, June 27, 2020

Robert Mueller Investigation Essay (Free Example)

Robert Mueller Investigation Essay (Free Example) In this essay about special prosecutor Robert Mueller’s investigation into Russian interference in the 2016 presidential election, we provide an overview of the information available at the time of publication about that investigation.   The essay will explain who Robert Mueller is; what the investigation is trying to find; and why the investigation was instigated.   In addition, the essay will discuss the latest news, as of May 21, 2018, about the investigation.   In addition to discussing the Mueller investigation, the essay will provide you with a technical guide for writing academic essays.   In addition to being formatted in an appropriate academic style, it will include all of the standard parts of an academic essay, including: introduction, thesis statement, evidence and analysis in the body paragraphs, and a conclusion. Table of Contents1 Topics2 Titles3 Outline4 Introduction5 Essay Hook6 Thesis Statement7 Body7.1 Who is Mueller?7.2 What is the Mueller Investigation? 7.3 Why is Mueller Investigating the 2016 Presidential Election?7.4 What are the Latest Developments in the Investigation? 8 Conclusion9 Works Cited10 Closing Topics Robert Mueller- Robert Mueller is the special prosecutor appointed to look into Russian interference in the 2016 Presidential campaign.   As evidence increases suggesting that the Trump campaign colluded with Russia, Republicans have claimed the Mueller’s investigation is partisan.   An essay focusing on Mueller’s background would look into his political allegiances, prior work history, and other aspects that would support or discredit claims that he is behaving in a partisan manner. Christopher Steele- The ex-spy who claims that a Russian dossier exists that makes Trump vulnerable to blackmail by the Kremlin.   With allegations that the Trump campaign colluded with Russia to influence the 2016 presidential election, any interactions between Trump and Russian officials becomes of interest.   This includes allegations by ex-spy Christopher Steele that a dossier on Trump exists that details information, including a video tape of sexual activity with prostitutes, which could allow the Kremlin to influence Trump’s behavior.   This essay would not only discuss Christopher Steele’s history as an agent and the alleged contents of the dossier, but also whether those contents would actually give the Kremlin leverage over Trump. Election Influencing and the Internet-   The allegations that Russia influenced the 2016 presidential election are multi-faceted.   In addition to claims of possible tampering with election machines and votes, the claims include Russia’s role in using social media to influence the outcome of the election.   An essay on this topic would discuss how social media has been used to influence elections, both in the United States and abroad. Titles The Mueller Investigation Did the Trump Campaign Collude with Russia to Influence the 2016 Election? Is the Mueller Investigation a Partisan Witch-Hunt? Why the Mueller Investigation is Important to Ensure Election Integrity Outline I.   Introduction II.   Body a.   Who is Mueller? b.   What is the Mueller Investigation? c.   Why is Mueller investigating the 2016 presidential election? d.   What are the latest developments in the investigation? III.   
Conclusion Introduction The Robert Mueller Investigation is a Special Counsel Investigation by United States law enforcement into whether there was any collusion between Donald Trump’s 2016 presidential campaign and any foreign power to influence the election.   The investigation specifically focuses on whether there was any collusion between the Trump campaign and the Russian government, which the investigation has already established made attempts to influence the campaign’s outcome.   The investigation was initially handled by the Federal Bureau of Investigation (FBI), but in May 2017 Robert Mueller was appointed as a United States Special Counsel to head the investigation. This led to the consolidation of many separate investigations, including investigations in the former chairman for the Trump campaign, Paul Manafort, and Trump’s former National Security Advisor, Michael Flynn.   While the focus of the investigation has been on election interference and collusion with Russia , it has also investigated possibly related areas, such as hush-money payments made by people connected to the Trump campaign to various individuals who have claimed sexual relationships with Trump.   I Essay Hook While President Trump or his surrogates have repeatedly called the Mueller Investigation a witch-hunt and suggested it has no foundation, the reality is that Mueller has not only found enough evidence to indict several members of Trump’s campaign or administration, but also that the investigation has already led to some guilty pleas. Thesis Statement While the political process and the fact that a sitting President cannot be indicted on federal charges, only impeached, may mean that Trump evades prosecution for collusion in the 2016 campaign, the results of the Mueller investigation have already proven a significant level of collusion to influence the 2016 campaign, between Trump’s associates and Russia. Body Who is Mueller? Special Investigator Robert Mueller may face accusations of engaging in partisan politics, but his history, both as a professional and as an individual, make those accusations appear spurious. Robert Mueller, a former US attorney, was the second-longest serving FBI director in history; in 2011, Congress voted to extend Mueller’s term by two years, so that he could serve a 12-year tenure, rather than a 10-year tenure (Kopan, 2017).   He was unanimously confirmed by the Senate in 2001, after being appointed by a Republican President, George W. Bush.   Likewise, in 2011, the vote to extend his term was unanimous.   He is considered by other law enforcement professionals to be among the best, if not the best, investigative prosecutor in the country, and has been responsible for a number of high-profile investigations.   He personally identifies as a Republican, politically, though there are no allegations that his personal political beliefs have influenced any of his actio ns as FBI director or US attorney. Robert Mueller What is the Mueller Investigation? While referred to as the Mueller Investigation in the press, the Mueller Investigation is more appropriately called the Special Counsel Investigation.  Ã‚   Mueller is the head of the investigation, but he was not responsible for beginning the investigation.   
Instead, he was appointed by the Justice Department to investigate allegations that Russia influenced the 2016 Presidential campaign after the Trump Administration fired FBI director James Comey, who was investigating those allegations, and Attorney General Jeff Sessions recused himself from the investigation.   â€Å"As a special counsel, Mr. Mueller can choose whether to consult with or inform the Justice Department about his investigation†¦He is empowered to press criminal charges, and he can request additional resources subject to the review of an assistant attorney general† (Ruiz and Landler, 2017).   Therefore, the investigation is both independent of other law enforcement agencies, but also subject to oversight by the Attorney General’s office, which is, in turn, part of the executive branch.   This makes the investigation difficult, as the Trump Administration, which is the subject of the investigation, also retains the power to fire Mueller, and even to order that the Attorney General end the investigation and not name another special prosecutor if Mueller is fired.   Therefore, the Mueller investigation is best described as a semi-independent Justice Department investigation, headed by special prosecutor Robert Mueller, into allegations that President Trump’s 2016 campaign colluded with Russia to influence the outcome of the election. Why is Mueller Investigating the 2016 Presidential Election? There are multiple answers to the question about why Mueller is investigating the 2016 presidential election.   The first set of answers focuses on why Mueller is the individual in charge of the investigation.   The second set of answers focuses on why there is an investigation.   The answers are related, because, if the reasons necessitating the selection of Special Counsel are accurate, they could help bolster claims of collusion between the Trump Campaign and Russia. Appointing a Special Counsel became necessary after President Trump fired FBI Director James Comey.   Trump reportedly asked Comey to end the investigation into Trump’s National Security Advisor Michael Flynn and allegations that Flynn had colluded with Russia in matters regarding the 2016 campaign.   At the time those allegations came to light, there was not evidence that Flynn was necessarily acting on behalf of Trump’s campaign.   When Comey refused to end the investigation into Flynn, Trump fired him from his position.   The acting Attorney General responded by naming Robert Mueller, a man known for his impartial and judicious approach to issues, as special counsel to investigate those claims of collusion. ? There are a number of reasons that collusion between the Russians and the Trump campaign is suspected.   It is already well-established that Russia expended significant efforts to influence the 2016 Presidential campaign, including a very determined effort to use social media, specifically Facebook, to influence voters.   In January of 2017, the Senate Select Committee on Intelligence launched a bipartisan probe of Russian meddling, which was sparked by a declassified report from the Director of National Intelligence, detailing efforts by Russian President Vladimir Putin, to help influence the campaign by smearing Democratic candidate Hillary Clinton.   
This was followed by investigations by the House Permanent Select Committee on Intelligence, the Senate Judiciary Subcommittee on Crime and Terrorism, the House Committee on Oversight and Government Reform, the Senate Judiciary Committee, the F.B.I., and then the Justice Department.   The investigation started with allegations that Michael Flynn, who would become Trump’s National Security Advisor; Paul Manafort, who would become chairman of the Trump campaign; Jared Kushner, Trump’s son-in-law; and Donald Trump, Jr., all had contact with Russians who were giving them or offering to give them damaging information about Hillary Clinton in order to damage her campaign.   Later, hackers break into DNC computers, steal oppositional research on Trump, and getting access to DNC emails. Donald Trump What are the Latest Developments in the Investigation? As of May 21, 2018, the investigation had resulted in a number of indictments, guilty pleas, and convictions. Former Trump campaign advisor George Papadopoulos has pled guilty to making false statements to the FBI about his contacts with Russian official while working for Trump’s campaign.   Michael Flynn pleaded guilty to making false statements to the FBI.   Three other people have entered guilty pleas in relation to the investigation: Paul Manafort’s business partner Rick Gates, Richard Pineo, and Alex van der Zwaan.   Mueller has indicted Paul Manafort, thirteen Russian citizens, and three Russian entities on a variety of charges linked to the investigation. [ several sections of this essay are missing, click here to view the entire essay ] Conclusion The Mueller Investigation began as a series of several investigations into whether Russia interfered with the 2016 Presidential election, with the goal of damaging Hillary Clinton’s campaign.   Initially not linked to the Trump campaign, the investigation was merely looking at Russian hacking efforts.   However, as the investigation unfolded, it revealed an increasing amount of evidence suggesting that the Trump campaign, if not Trump himself, was aware of the interference.   This evidence became particularly troublesome when Trump began to make efforts to discredit the investigation and even fired James Comey because he refused to discontinue the investigation into Flynn.   While characterized as a witch-hunt by Trump and his surrogates, the investigation has revealed enough evidence to result in several guilty pleas and even more indictments.   As the investigation continues to uncover evidence, and as more witnesses cooperate with the investigation, it is possible that it will either lead to Trump firing the special investigator or to substantial evidence that Trump, himself, colluded with the Russians.   Either result could lead to a constitutional crisis unlike anything seen in the United States since the Watergate scandal. Works Cited Kopan, Tal.   â€Å"Who Is Robert Mueller?†Ã‚   CNN.   18 May 2017.  https://www.cnn.com/2017/05/17/politics/who-is-robert-mueller/index.html.   Accessed 21 May 2018. Ruiz, Rebecca and Mark Landler.   â€Å"Robert Mueller, Former F.B.I. Director, Is Named Special Counsel for Russia Investigation.†Ã‚   NYTimes.com.   17 May 2017.   https://www.nytimes.com/2017/05/17/us/politics/robert-mueller-special-counsel-russia-investigation.html.   Accessed 21 May 2018. 
Closing After reading this Mueller Investigation essay, you have a basic understanding of what the Mueller Investigation is, who Robert Mueller is, why he is investigating Russian interference in the 2016 election, and the results of the investigation as of May 21, 2018. Because this is an ongoing story, you should be sure to check additional news sources for the latest information about the investigation. In addition to providing you with a factual background of the Mueller Investigation, this essay has also served as a template for an academic essay. It has shown you the appropriate structure for an expository essay, how to use quotations and cite sources, and how to structure a works cited page. If you have any additional questions about academic essay structure, one of our professional writers or editors would be happy to help you.

Sunday, June 7, 2020

Buy Law Research Paper and Get Qualified Writing Help

Buy Law Research Paper and Get Qualified Writing Help Buy Law Research Paper and Get Qualified Writing Help Law is one of the most interesting fields of study. But is this subject research paper writing as captivating as the discipline itself? Not always. Of course, it depends on a student and his/her character. But most often law students prefer to spend their time learning something new rather than conducting research. And that’s completely understandable! If you like thousands of other law students don’t wish to spend days and nights of your priceless time on conducting the research, you are welcome to buy law research paper as well as any other like literature research paper from.com! Law Research Paper Writing Tips Despite having fixed rules and regulations for research paper writing, there are many mistakes students tend to repeat year after year. Most of them are those related to formatting and general organization of the paper. Even if your text seems flawless to you, bad layout can ruin everything. Here are a few tips on what you should not do when completing a research paper in law: Have several research paper topics in your mind, discuss them with your advisor and choose the best one. Each table, chart or graph should be followed by an explanation. Don’t expect that all your readers are professionals in your field and understand all the abbreviations you use. Use professional language. Slang is an absolute taboo for this discipline research paper. Stick to your topic, don’t overload your paper with unnecessary and irrelevant information. Present information in logical order, don’t ‘jump’ from one aspect of the problem to another. Make sure the approach you utilize for the research applies to your particular topic. Create your schedule and follow it. This will help you avoid procrastination and manage your time.Remember to Proofread Your Business Law Research Paper! The fact that you’ve got to the point where you need to write a research paper by default means that you are a literate person. That’s why spelling and grammar mistakes are unacceptable in your paper! Have your research paper proofread before submitting it. There are two ways to do it. You may ask your friend (preferably a grammar or spelling guru) to read your paper and fix inadequacies. Another way is to ask.com for academic assistance. We not only write but also proofread papers! This way you may be sure that not only your spelling and grammar will be perfect, but also that your text will be readable. Buy Law Research Paper and Forget Your Worries! Time flies with skyrocket speed nowadays! And, of course, you don’t want to spend countless hours in front of your computer screen or at the library when there is so much exciting stuff going on around you. This is where.com comes in to save the day! With us, you may be sure to get an excellent piece of writing for a reasonable price! Our writers are professionals in their field, they have written thousands of papers! That’s why you may fearlessly entrust your college assignment to us! Just think how great it to have your paper written by a professional! Order a paper today, and we will take care of your academic struggles!

Monday, May 18, 2020

Essay on Analysis of Three Republican Presidential Candidates

The United States economy has always had its ups and downs like the rest of the world’s economies. Since December 2007 it has been in a recession, and it has not been able to come out of it yet; with the passage of time, we sink even more. Our current president Barack Obama has certain strategies to take us out of this, even though according to economists that recession ended since June 2009, the economy is growing too slow and the number of unemployed people is outrageous. For the 2012 elections many Republican candidates are trying to formulate new plans that differ from those that Obama has worked out, all these are strategies to get elected as presidents or in Obama’s case, reelected. Ron Paul has been part of the United States†¦show more content†¦Another of his proposals is to eliminate or audit the Federal Reserve because it prints currency daily which causes the dollar value to go down. He also plans on cutting Federal spending because the Republicans a nd Democrats do not seem to reach an agreement on whether or not to raise the debt ceiling. He is not in favor of the U.S defaulting on its fourteen trillion debt because that will make the debt held by other countries depreciate in value. His proposals are very tempting for businesses. He does not want government intervention and making all these regulations will make it better for the people to demand because they will have more income to spend, and the small businesses and large corporations will have enough revenues to keep supplying more products for everyone. His proposal to increase tariffs on imported goods will hurt the U.S trade with other countries. His offers are designed to help the economy and to take it to a state where it can be healthy again and grow. Ron believes in having a balanced budget to have a strong and safe economy. Rick Perry has been Texas’ governor the longest after being reelected twice. He is concentrating on creating new jobs and preserving social values. He has a record of created jobs in Texas. His plan is to make the economy better basedShow MoreRelatedThe Presidential Debate On Politics Essay1140 Words   |  5 PagesI should preface this essay with the fact that I tend to lean Republican in my political beliefs but I think that our current political debates are broken. A constructive national debate is something that is quite important to the functioning of the American system of democracy. Civil discussions and disagreements have been what fuels progress in this country. Now, at a time of heightened awareness from many American people, the political debates in this country don’t seem to be providing them withRead MoreThe Presidential Debate On The American System Of Democracy Essay1616 Words   |  7 Pagesgood cogent arguments. Instead they are filled with fallacies and many falsehoods. In this essay I argue that the presidential debate system is currently not living up to its potential, and I will foc us specifically on Republican primary debate that took place at the Reagan Library in Simi Valley, California. In doing so, I will argue that the main flaws in this cycle’s presidential primary debates were the amount of fallacies used, as structure used as well as provide some counter-arguments to myRead MoreVoter Mobilization And Party Affliction For Women Essay1339 Words   |  6 Pagesfor this privilege. However, in this current day are women still motivate and women are still an easy group to mobilize? 
Logically the answer would be yes since hundreds of suffragettes fought for this fundamental right, but in the most recent presidential election about only 55.6% of all Americans exercised their right to vote. In the United States gender influences a multitude of different experiences, decisions and affiliations many would think more would go out and vote. Throughout this paperRead MoreThe Presidential Election Of 2016 Essay1422 Words   |  6 Pages The presidential ele ction of 2016 shocked people across America after Donald Trump won the election. Many people questioned how such a candidate could run for office, much less hold one of the most powerful positions on Earth. In attempting to uncover how Trump could be victorious in an electoral race against Hillary Clinton, this paper will analyze four key factors in a general election: the fundamentals, campaigns, the media, and the voters. Each factor provides insight on how Trump was able toRead MorePolitical Election Survey : Presidential Election Surveys Essay1148 Words   |  5 Pages2016 Political Election Survey Out of the last decade of presidential elections, the 2016 election may very well be the most prominent election to view peak polarization between presidential nominees. Every year as technology advances, the impact of social media raises, citizens who are older, that are used to print newspapers, listening to the radio, are being socially waned off of their preferential choices of media. Even televised news stations are showcasing conservative or liberal preferentialRead MoreThe Republican Party And Bernie Sanders1513 Words   |  7 PagesThere are two Democrats and there are three Republicans remaining in the 2016 presidential candidate race. Hillary Clinton and Bernie Sanders are the only two Democrats running in the 2016 presidential election. Ted Cruz, John Kasich, and Donald Trump are the only three Republicans running in the 2016 presidential election, but the two candidates I had were, Bernie Sanders and Gary Johnson. Bernie Sanders is a part of the Democratic Party and Be rnie Sanders is a part of the Libertarian Party. BernieRead MoreEssay On Election800 Words   |  4 Pages For my election analysis paper I choose to pick White evangelical Christians, Blacks, and Males. I chose these demographic groups because i thought that they influence the election the most out of all the others. I also think that they are the largest target groups for the presidential candidates to appease to. In the last election we can see how significant these three different demographic groups changed the course of the election. For the evangelical Christians there is no data on the 2000Read MoreThe Election Cycle Of The Swing State1620 Words   |  7 Pagessimple analyses such as incumbency advantage may not fully explain the results and polls in the race. Patrick Murphy, Democrat, is a congressman challenging Marco Rubio’s Florida Senate seat. Marco Rubio is advantaged, both by incumbency and his presidential campaign. As an incumbent, Rubio should have an advantage over Murphy for numerous reasons. John Sides et al. succinctly explains how incumbents have more campaign and political experience, more resources, and wide name recognition (Sides et alRead MoreThe President Of The United States1532 Words   |  7 PagesRand Paul, the son of famous libertarian Ron Paul presidential candidate, and Kentucky senator began running for the position of president of the United States of America, on April 7th 2015 under the Republican Party. 
He ran under the slogan Defeat the Washington machine. Unleash the American dream, and promised to be a non-establishment Republican president. Rand Pal is by far the best candidate for the presidency in all fields, but most specifically, for our economy, our foreign affairs, and ourRead MoreGeorgia C ase Analysis1682 Words   |  7 Pagessouthern neighbors. The state succeeded during the Civil War and the election of Abraham Lincoln in 1860. During the period of Reconstruction, the state was forced to have two Republican Governors, but once regular elections returned in 1872 the state has seen an unprecedent streak of democratic governors. Not until 2003 did a Republican Governor reign over the state since Reconstruction. While many other southern states went through similar long periods of near one-party rule; Georgia’s length is unique

Sunday, May 17, 2020

Data Chirp Internet - Free Essay Example

Development of a Data Chirp Measurement Method

In measuring data link quality for downloading and browsing services, the final perceived quality by the user is determined by the session time. Measuring session times is time consuming because for each context, e.g. a small file download, a large file download or a browse session over a number of small Internet pages, these times have to be measured separately. Therefore data links are mostly characterized in terms of key performance indicators (KPIs) like bandwidth, loss and delay, from which one tries to derive the session times. However, the relations between these KPIs and the session times are not clear, and measuring these KPIs is also difficult. This report presents a method that characterizes a data link by a so-called delay fingerprint from which the session times can be predicted. The fingerprint is derived from a concatenated lumped UDP packet transfer (packets sent immediately after each other) followed by a UDP stream of which the sending speed is increased continuously using single packet transfers. This chirping approach causes self-induced congestion and allows fingerprinting with only minimal loading of the system under test. In this contribution, live networks as well as an Internet simulator are used to create data links over a wide variety of conditions where both the data chirp fingerprint and the download/session times are measured. From the fingerprint a number of link indicators are derived that characterize the link in terms of KPIs such as ping time, loaded (congested) bandwidth, unloaded (un-congested) bandwidth, random packet loss, etc. The fingerprint measurements allow predicting the service download/session times for small downloads and fast browsing with a correlation of around 0.92 for the simulated links. For large file downloads and large browse sessions no acceptable prediction model could be constructed.

Introduction
Multimedia applications have played an important role in everyday life in recent years. On the Internet, apart from the widely used Hypertext Transfer Protocol (HTTP), many real-time applications contribute significantly to the overall traffic load of the network. State-of-the-art Information and Communication Technology (ICT) concepts enable the deployment of various services and applications in areas like home entertainment, offices, operations and banking. The backbone of all these distributed services is the core network which facilitates data communication. The quality of the available network connections will often have a large impact on the performance of distributed applications. For example, the response time of a document requested via the World Wide Web crucially depends on network congestion. In general, if we want to quantify the quality of a data link from the user point of view, we can use two approaches: 1) a glass box approach, in which we know all the system parameters of the network (maximum throughput, loss, delay, buffering) and the application (TCP stack parameters) and then use a model to predict download/session times and UDP throughput; 2) a black box approach, where we characterize the system under test with a test signal and derive a set of black box indicators from the output.
From these link indicators the download/session times, or other relevant KPIs, are predicted. The first approach is taken in draft recommendation [1]; this report investigates the second approach. In most black box approaches an estimate is made of the available bandwidth, which is an important indicator for predicting the download and session times. Several approaches exist for bandwidth estimation, but bandwidth is not the only important link parameter. For small browsing sessions, end-to-end delay and round-trip time are also key indicators that determine the session time and thus the perceived quality. A good black box measurement method should not merely quantify KPIs but should be able to predict download and session times for the download and browse services that run over the link.

Data Links

In telecommunication a data link is used to connect one location to another for the purpose of transmitting and receiving data. It can also be an assembly, consisting of parts of two data terminal equipments (DTEs) and the interconnecting data circuit, that is controlled by a link protocol enabling data to be transferred from a data source to a data sink. Two systems can communicate with each other using an intermediate data link which connects both of them. The data link can be made up of a huge number of elements that all contribute to the final perceived quality. In real-world connections cross traffic will always have an impact on the download and session times, making such links difficult to use in the development of a data link quality model. Therefore most of the model development in this report is carried out using simulated links. The setup was established at TNO-ICT Delft, The Netherlands. A Linux system with a network card that has two interfaces was used to emulate a particular network.

Outline

Chapter 2 of this report describes the problem definition and the tasks performed in the project. Chapter 3 explains the various key performance indicators that quantify data link performance. The measurement approach employed and the principle behind the chirp are described in chapter 4. The experimental setup at TNO-ICT is described in chapter 5. In chapter 6 the implementation of the KPI measurements is discussed. In chapter 7 the mapping between chirp and service characteristics is discussed. Chapter 8 gives the conclusions. Some of the measurement results are discussed in Appendix A; the management of the project can be found in Appendix B.

Problem Definition

An operator has to know how to set the network parameters in order to deliver the most appropriate end-to-end quality, based on the network KPIs, the service characteristics and the characteristics of the end user equipment. A fast, efficient method for assessing the impact of a network setting on the final perceived download and session times is thus of vital importance. Plain optimization of KPIs is not necessarily the best strategy, because the final perceived quality is determined by the download and session times. In the ideal case a method should be developed with which instantaneous insight can be created into the performance of all services that are carried via the link under consideration. Such a method can also be used to create applications which can take decisions on resource selection or reservation. For small downloads and fast browsing the ping time of the data link will be the dominating factor. For large downloads the available bandwidth will be the dominating factor.
The TCP stack parameters also have a significant impact on these times, as they determine the slow start behavior. For intermediate file sizes the available bandwidth and the un-congested bandwidth will be important, most probably in combination with other KPIs such as packet loss, buffer size and possible bearer switching mechanisms (UMTS). This report presents a method that characterizes a data link by a so-called delay fingerprint from which a set of KPIs is derived. These KPIs are then used to predict the service quality in terms of download/session times. The basic idea, taken in a modified form from [3], is to characterize the data link by sending a limited set of UDP packets over the line in such a way that the delay behavior of these packets characterizes the link in as many aspects as possible. Two types of characterization are used. The first uses a lumped set of packets that are sent over the line immediately after each other, from which the smearing of a single packet can be estimated; this estimate is closely related to the un-congested bandwidth of the link. The second uses a train of UDP packets separated by an ever decreasing time interval, resulting in a so-called data chirp from which the available bandwidth can be estimated.

Key Performance Indicators

In this project we focus on the estimation of a number of KPIs from which the quality of a data link can be determined. We consider the data link performance indicators that are dominant in their impact on the end-to-end session time.

Ping Time

Ping time, or round-trip time (RTT), is the amount of time it takes for a packet to go from one computer to another and for the acknowledgment to be returned. For links that span long distances the RTT is relatively large, which directly affects the browse and download times.

Available Bandwidth

Available bandwidth (AB) is the approximate transfer rate that an application can get from a connection in the presence of cross traffic load. Measuring the available bandwidth is of great importance for predicting the end-to-end performance of applications, for dynamic path selection and traffic engineering, and for selecting between a number of differentiated classes of service [4]. The end-to-end available bandwidth between client and server is determined by the link with the minimum unused capacity (referred to as the tight link). In Figure 3.1 the end-to-end available bandwidth is determined by the minimum unused capacity, indicated as A.

Figure 3.1 The available bandwidth determined by the unused capacity of the tight link.

Several applications need to know the bandwidth characteristics of the underlying network paths. For example, some peer-to-peer applications need to consider available bandwidth before allowing candidate peers to join the network. Overlay networks can configure their routing tables based on the available bandwidth of the overlay links. Network providers lease links to customers and the charge is usually based on the available bandwidth that is provided. Available bandwidth is also a key concept in congestion avoidance algorithms and intelligent routing systems.
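To make the tight-link notion concrete, the following minimal Python sketch (not part of the report's tooling) computes the end-to-end available bandwidth as the minimum unused capacity over the hops and, for comparison, the un-congested bandwidth as the minimum capacity, which is discussed in the next subsection; the per-hop numbers are hypothetical.

    # Minimal sketch: end-to-end available bandwidth as the minimum unused capacity
    # over all hops (tight link), compared with the un-congested bandwidth as the
    # minimum capacity (bottleneck link). All values are hypothetical, in kbit/s.
    capacities = [10_000, 2_000, 5_000]      # C1, C2, C3
    cross_traffic = [6_000, 500, 4_500]      # current cross-traffic load per hop

    available_bw = min(c - x for c, x in zip(capacities, cross_traffic))  # tight link
    uncongested_bw = min(capacities)                                      # bottleneck link

    print(f"available bandwidth  A = {available_bw} kbit/s")
    print(f"un-congested bandwidth C = {uncongested_bw} kbit/s")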
Techniques for estimating available bandwidth fall into two broad categories: passive and active measurement. Passive measurement is performed by observing existing traffic without perturbing the network; it processes the load on the link and requires access to all intermediary nodes in the network path to extract end-to-end information [6]. Active measurement, on the other hand, directly probes network properties by generating the traffic needed to make the measurement. Despite the fact that active techniques inject additional traffic on the network path, active probing is more suitable for measuring the end-to-end available bandwidth. In communication networks, high available bandwidth is useful because it supports high-volume data transfers, short latencies and high rates of successfully established connections. Obtaining an accurate measurement of this metric can be crucial to the effective deployment of QoS services in a network and can greatly enhance different network applications and technologies.

Un-congested Bandwidth

The term un-congested bandwidth (UB) refers to the maximum transfer rate available for a particular connection in the absence of other traffic (clean link). For a connection it is hard to achieve a transfer rate equal to the UB because of factors such as random packet loss and the TCP slow start mechanism. The UB is limited by the bottleneck link capacity: the un-congested bandwidth of a link is determined by the link with the minimum capacity (termed the bottleneck link). In Figure 3.2, the un-congested bandwidth of the link between the client and server is C = C1, where C1, C2 and C3 are the capacities of the individual links and C1 is the smallest.

Figure 3.2 The un-congested bandwidth determined by the bottleneck link capacity.

Packet Loss

Packet loss can be caused by a number of factors, including signal degradation over the network medium, oversaturated network links, corrupted packets rejected in transit, faulty networking hardware, or normal routing routines. The available bandwidth decreases with increasing packet loss. In this project we observe two types of packet loss, i.e. random packet loss and congestion packet loss; both are discussed in the next chapter, where we also go into the details of how to measure these KPIs.

Key Performance Indicator Measurement Approach

In this chapter we discuss how the key performance indicators described in chapter 3 are measured.

Ping Time

Ping is a computer network tool used to test whether a particular host is reachable across an IP network. It works by sending ICMP echo request packets to the target host and listening for ICMP echo reply responses. Ping estimates the round-trip time, generally in milliseconds, records any packet loss, and prints a statistical summary when finished. The standard ping tool in Windows XP was used to determine the ping time.

Available Bandwidth Estimation using a UDP Chirp

The data chirp method characterizes the end-to-end quality of a data link in terms of a delay fingerprint. Using the data chirp method, a train of UDP packets is sent over the data link with an exponentially decreasing time interval between subsequent packets. Such a train of packets is referred to as a data chirp. From the delay pattern at the receiving side one can determine characteristic features of the data link such as bandwidth, packet loss and congestion behavior. From these characteristic features one can then try to estimate the service quality of the different services that run over the data link. In the classical data chirp [3] the time interval Tm between two consecutive packets m and m+1 is given by:

Tm = T0 * α^m, with 0 < α < 1,

where T0 is the time interval between the first two packets. The factor α (< 1) determines how fast the interval between subsequent packets in the data chirp decreases.
As a result of this decrease, the instantaneous data rate during the data chirp increases. The instantaneous data rate at packet m, Rm, is given by:

Rm = P / Tm [bytes/sec],

where P is the size of a UDP packet in the chirp. A data chirp is illustrated in [3] and shown in Figure 4.1, consisting of individual packets sent over the link with an ever smaller interval.

Figure 4.1 Illustration of a data chirp.

The delay of the UDP packets in the data chirp after traveling over the data link is determined relative to the delay of the first packet. The resulting delay pattern, in which the relative delay per UDP packet is shown as a function of the packet number, is referred to as the data chirp fingerprint. A typical data chirp fingerprint for a fixed-bandwidth 64 kbit/s bit pipe without cross traffic is shown in Figure 4.2.

Figure 4.2 Data chirp fingerprint for a fixed bandwidth bit pipe of 64 kbps.

From such a data chirp fingerprint a number of parameters can be determined, including the available bandwidth, random packet loss, congested packet loss and un-congested bandwidth [5]. In the chirp, packets are sent individually over the line with a continuously decreasing inter-sending time, quantified by the factor α (< 1) and resulting in an inter-sending time Tm for the m-th packet. The combination of T0, α and the chirp size N determines the lower and upper bound of the throughput of the UDP stream. At the start of the chirp, packets should be sent over the link with large enough time intervals to be able to characterize low-bandwidth systems. Furthermore, the inter-sending time should decrease to a value so low that the required bandwidth is higher than the maximum expected available bandwidth. Finally, a small α allows a fast characterization of the link, while an α near 1 allows more accurate, but more time consuming, bandwidth estimations. After some initial experiments the values of the chirp were set to P = 1500 bytes, T0 = 200 ms, α = 0.99 and N = 400, resulting in a lower and upper input into the system of 64 and 5000 kbit/s respectively. This chirp provides a well-balanced compromise for these measurements. Figure 4.3 gives an overview of this approach.

Figure 4.3 Data chirp, sent to estimate AB, using the idea of self-induced congestion with ever smaller inter-sending times.

This approach was first tested over the virtual tunnel interface running on the Linux machine. When the chirp was put over a clean link (i.e. over a tunnel with no cross traffic), the estimate of the available bandwidth was somewhat higher than the actual bandwidth set by the Netem GUI. This is due to the buffers in between, which pass the chirp over the link at a higher speed; because of this, the estimate is about 20% higher than the actual link speed. The second problem with this scheme was that when we sent the chirp over the link with cross traffic, the fingerprint we obtained was not good enough for a correct estimation. The reason is that the chirp tries to push in through the TCP cross traffic, and by the time it succeeds there is so much packet loss that we cannot make a proper estimate. We therefore send this chirp repeatedly (4 times) with an inter-sending time of 5 seconds. We can estimate the available bandwidth using:

Rm = P / Tm [bytes/sec],

where P is the packet size and Tm is the average inter-sending interval at that point.
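As a small illustration of these relations, the sketch below (not taken from the original Delphi tooling) evaluates the inter-sending times Tm = T0 * α^m and the corresponding instantaneous rates Rm = P / Tm for the parameter values quoted above.

    # Minimal sketch of the chirp schedule: inter-sending times Tm = T0 * alpha**m
    # and instantaneous rates Rm = P / Tm, using the parameter values quoted above.
    P = 1500          # packet size in bytes
    T0 = 0.200        # initial inter-sending time in seconds
    ALPHA = 0.99      # decrease factor per packet
    N = 400           # number of packets in the chirp

    intervals = [T0 * ALPHA ** m for m in range(N)]        # seconds between packets
    rates_kbit = [8 * P / t / 1000 for t in intervals]     # instantaneous rate in kbit/s

    print(f"first rate: {rates_kbit[0]:.0f} kbit/s, last rate: {rates_kbit[-1]:.0f} kbit/s")
    print(f"total chirp duration: {sum(intervals):.1f} s")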
Un-congested Bandwidth

In principle, the un-congested bandwidth can be estimated from the smearing of a single packet. Even when there is cross traffic on a data link and we would like to estimate the bandwidth for the clean situation, we can use the smearing time: in a congested link a single packet is still smeared according to the un-congested bandwidth. However, obtaining the smearing time of a single packet is difficult with normal hardware equipment. Therefore packets are sent in pairs, as close as possible (back-to-back) after each other. This makes it possible to assess the smearing time with normal hardware, because we can measure the receive timestamps of each packet and deduce the smearing from them. This method only works when the chance that cross traffic is sent over the data link between the two packets is minimal. Figure 4.4 illustrates the packet pair smearing measurement method.

Figure 4.4 The use of packet pairs (P1, P2) for the determination of the un-congested bandwidth. Ts is the time at which the first bit of a packet is put on the link; the receive timestamps mark the moment at which the first bit of a packet starts to arrive and the moment at which its last bit has been received.

As illustrated in Figure 4.4, the packets leaving the data link are smeared compared to the original packets; the smearing is indicated as ΔT and is determined from the time interval between the arrival of the first and the second packet in the pair. From this smearing, the un-congested bandwidth UB can be estimated using:

UB = P2 / ΔT [bytes/sec],

where P2 is the size of the second packet in bytes. To estimate the un-congested bandwidth we implemented the method described above, i.e. sending packet pairs over the link. However, buffering can cause measurement problems: when data is stored and forwarded, the link speeds preceding the buffer are no longer taken into account in the un-congested bandwidth estimate. This can for a major part be solved by using a lumped set of packets that vary between 1 and N concatenated packets; packets in lumps can no longer be stored in the small buffers of the link. In the currently proposed measurement method we start with a single packet and then concatenate packets until a lump of seven packets is reached, from which point lumps of seven are repeated. The reason for not using more packets in a lump is the underlying Windows mechanism, which does not allow sending more than seven packets back-to-back; if the lump size is increased, random packet loss occurs from the first lump of more than seven packets onwards. The lumps are sent using a chirp-like approach as given in section 4.2. The length of this series depends on the experimental context; an optimal choice is somewhere between 20 and 50 lumps for links with a speed between 100 and 10,000 kbit/s. In the final chirp P is set to 1500 bytes, the start interval to 200 ms and α to 0.97. With N = 50 this choice is a compromise between the wide range of speeds that can be assessed, the measurement time and the measurement accuracy. In most cases these concatenated packets will be handled immediately after each other by all routers, and from the so-called packet smearing times a data link characterization is made that has a high correlation with the un-congested bandwidth of the link. This bandwidth estimate is always higher than the available bandwidth, since availability is influenced by possible cross traffic on the data link. Test results obtained for the un-congested bandwidth are presented in chapter 7.
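The following minimal sketch illustrates the UB = P2 / ΔT relation from hypothetical receive timestamps of a back-to-back packet pair; it illustrates the formula only and is not the report's implementation.

    # Minimal sketch: un-congested bandwidth from the smearing of back-to-back packets.
    # The receive timestamps are hypothetical; P2 is the size of the second packet in bytes.
    def uncongested_bw_from_pair(recv_first, recv_second, packet_size=1500):
        """Estimate un-congested bandwidth in kbit/s from a back-to-back packet pair."""
        dt = recv_second - recv_first          # smearing time of the second packet
        if dt <= 0:
            raise ValueError("receive timestamps must be strictly increasing")
        return 8 * packet_size / dt / 1000     # bytes/s -> kbit/s

    # Hypothetical example: the second 1500-byte packet arrives 12 ms after the first.
    print(f"UB = {uncongested_bw_from_pair(0.000, 0.012):.0f} kbit/s")  # about 1000 kbit/s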
Figure 4.5 provides an overview of the extended chirp.

Figure 4.5 Extended data chirp using the idea of measuring the smearing times of concatenated packets.

By measuring receive timestamps, the smearing of a packet can be measured when two or more concatenated packets are sent over the link. For this approach the un-congested bandwidth can be determined using:

UB = P / ΔT [bytes/sec],

where, in this lumped chirp version, P is the size of the lump and ΔT is the time difference between the arrival of the first and the last packet in the lump.

Random Packet Loss

The random packet loss is determined from the packets before the bending point. In theory no packet loss should occur before this point, so by checking whether packets have been lost during the transmission of these first packets the random packet loss can be determined.

Congested Packet Loss

At a specific point all buffers are filled to their maximum, and from then on the delay per packet cannot increase any further because of this buffering. This is the point where packets are being lost due to congestion on the link. This packet loss can also be determined from the chirp fingerprint.

Experimental Setup

The setup was established at TNO-ICT Delft, The Netherlands. A Linux system with a network card that has two interfaces is used to emulate the network.

Figure 5.1 Experimental setup used for the simulations.

Software Setup

The Linux kernel contains a module called Netem which provides functionality for testing protocols by emulating the properties of wide area networks. The current Netem version emulates variable delay, loss, duplication and packet re-ordering. End users have no direct access to the Netem module; Netem is accessed using the traffic control (tc) command, with which users can direct Netem to change the interface settings. A GUI for the tc command, termed the Netem PHP GUI, was developed and can be accessed via a web server.

Client Application

We have developed an application in Borland Delphi which runs on a Windows XP machine. This client application generates the chirp pattern. The standard TCP/IP stack present in Windows XP is used. Different parameters can be set through this application, such as the packet size and the interval between the chirps.

Server Application

An application was also developed which runs on a machine acting as the server. It uses the same Windows XP TCP/IP stack. The server application dumps the chirp information into files which are later post-processed to extract the KPIs from the received information. A web server also runs on this machine to obtain the service characteristics for FTP and browsing.

Key Performance Indicator Measurement Implementation

As discussed above, a lumped packet chirp and a single packet data chirp are sent from the client to the server. The first, lumped chirp pattern is used to estimate the un-congested bandwidth of the link: here, instead of sending a single packet, a lump is formed by concatenating several packets and this lump is sent over the link. After a certain time gap (5 seconds) the next chirp pattern, consisting of single packets (unlike the first chirp), is sent; it is used to obtain the delay signature and the associated data link parameters such as available bandwidth, random packet loss and congestion packet loss. The experiments are carried out on different links which are emulated using the network emulator as mentioned in chapter 5. The sending and receiving timestamps of each chirp packet are logged, and post-processing is done in order to extract the various parameters mentioned below.
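For illustration only, the sketch below shows how such a two-part chirp sender could look; the receiver address, the payload layout and the use of Python sockets are assumptions made for this sketch (the actual client was written in Borland Delphi), and a separate receiver is assumed to log the arrival timestamps.

    # Minimal sender sketch (assumptions: hypothetical receiver address/port, a separate
    # receiver logs arrival timestamps; the real client was implemented in Delphi).
    import socket, struct, time

    HOST, PORT = "192.0.2.10", 5005   # hypothetical receiver
    P, T0 = 1500, 0.200               # packet size (bytes), start interval (s)

    def send_lumped_chirp(sock, n_lumps=50, alpha=0.97, max_lump=7):
        """Lumps of 1..max_lump back-to-back packets, intervals shrinking by alpha."""
        seq = 0
        for i in range(n_lumps):
            for _ in range(min(i + 1, max_lump)):
                payload = struct.pack("!Id", seq, time.time()).ljust(P, b"\0")
                sock.sendto(payload, (HOST, PORT))
                seq += 1
            time.sleep(T0 * alpha ** i)

    def send_single_packet_chirp(sock, n_packets=400, alpha=0.99):
        """Single packets with exponentially decreasing inter-sending times."""
        for m in range(n_packets):
            payload = struct.pack("!Id", m, time.time()).ljust(P, b"\0")
            sock.sendto(payload, (HOST, PORT))
            time.sleep(T0 * alpha ** m)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_lumped_chirp(sock)
    time.sleep(5)                     # gap between the two chirps, as described above
    send_single_packet_chirp(sock)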
Estimation of Un-congested Bandwidth from Packet Smear Times

When a packet is sent over a data link it is smeared in time. The smear time is the difference between the time at which the complete packet has been received and the epoch at which the packet starts arriving. Figure 6.1 depicts the smear times of the packets received at the server side.

Figure 6.1 Packet smear times.

The smear times of the packets are logged into a file. Due to software limitations it is very hard to distinguish between the time at which a packet starts arriving and the time at which it has completely arrived; because of this, the smearing time is computed from the receive timestamps of consecutive packets. It can happen that packets are dropped, which would lead to a wrong estimate of the smear time. Therefore, if a packet drop is observed, the smearing time is not computed between two packets when a packet that was transmitted between them was not received. It can also happen that multiple packets are dropped. If there is packet loss inside a lump, the smearing time is estimated from the longest run of packets within the lump in which not a single packet is dropped. The logic behind this is depicted in the flowchart shown in Figure 6.2.

Figure 6.2 The un-congested bandwidth calculation from smear times.

Estimation of Available Bandwidth from the Bending Point of the Chirp Signature

The difference between the time at which a packet is received and the time at which it was sent is termed the delay. The sending timestamp is placed in the packet on the client side and the receive timestamp is recorded at the server side. As client and server are not time synchronized, the delays are aligned by subtracting the delay of the first packet from all delays; such a delay is referred to as the differential delay.

Figure 6.3 Chirp fingerprint.

The differential delay is used to generate the chirp signature by plotting the estimated differential delays against the packets received. As one can see from Figure 6.3, the differential delay suddenly increases (in this particular case around the 90th packet of the chirp train). This is the point where the link is completely occupied by the chirp packets and any cross traffic packets that are present. At this point the rate at which the chirp packets are sent represents the available bandwidth of the data link under consideration.
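A minimal post-processing sketch along these lines is given below; the bending-point detector is a deliberately simple heuristic, and the timestamp arrays, threshold and helper names are hypothetical rather than the report's actual implementation.

    # Minimal post-processing sketch: differential delays and a simple bending-point
    # detector for the available-bandwidth estimate. send_t/recv_t are hypothetical
    # per-packet timestamps in seconds; lost packets are assumed to be absent.
    def differential_delays(send_t, recv_t):
        """One-way delays relative to the first packet (removes the clock offset)."""
        base = recv_t[0] - send_t[0]
        return [(r - s) - base for s, r in zip(send_t, recv_t)]

    def bending_point(diff_delays, threshold=0.005):
        """Index where the delay starts to grow persistently (very simple heuristic)."""
        for m in range(1, len(diff_delays)):
            if all(d > threshold for d in diff_delays[m:m + 5]):
                return m
        return None

    def available_bw(bend_index, packet_size=1500, t0=0.200, alpha=0.99):
        """AB = P / Tm at the bending point, in kbit/s."""
        tm = t0 * alpha ** bend_index
        return 8 * packet_size / tm / 1000

    # Example use with hypothetical timestamp lists send_t and recv_t:
    # ab = available_bw(bending_point(differential_delays(send_t, recv_t)))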
Chirp Behavior versus Service Characteristics

The measurement setup used an internet simulator to manipulate the packet loss, buffer size, bandwidth and delay of a single PC-to-PC network link. We collected delay fingerprint data related to the chirp as well as the session times that quantify the service quality. The following parameters were used for these experiments:
Packet loss in the downlink between 0 and 10%.
Buffer size in the downlink between 1 and 30 packets.
Bandwidth in the downlink between 64 kbit/s and 1 Mbit/s.
Delay in the up- and downlink between 1 and 300 ms.
From the large set of possible combinations a subset of conditions was chosen. In each condition six measurements were carried out:
A HTTP browsing session time measurement using three files in the following time line: empty cache, start browse, 2 kByte download, 10 kByte download, 70 kByte download, end browse (browse small, medium and large, respectively).
A HTTP download of a 4 kByte file (download small).
A HTTP download of a 128 kByte file (download medium).
A HTTP download of a 4000 kByte file (download large).
A ping round-trip time measurement.
A data chirp measurement using the pattern described in chapter 4.
A standard Windows XP TCP/IP stack was used, and for some conditions the system showed bifurcation behavior. This can be expected, since an acknowledgement can be received just in time or just too late depending on infinitesimally small changes in the system. In all cases where this behavior was found, the minimum download/session time was used in the analysis. Experiments were performed under different data link scenarios by changing buffer sizes, packet loss and delay, with and without competing cross traffic. Several chirp settings were tried in order to find the optimum settings. For a particular data link, the service characteristics mentioned above (session times) were measured; later, on a data link with the same conditions, a data chirp was sent and the data link key performance indicators were computed from the chirp signature. The experimental observations are discussed in Appendix A.

The correlation between the link capacity and the un-congested bandwidth estimates was excellent: 0.99 (see Figure 7.1).

Figure 7.1 Un-congested bandwidth estimation.

The correlations between the service characteristics (browsing/download times) and the KPIs estimated from the chirp delay pattern were lower. The results show that the small browsing session times are dominated by the ping time and the un-congested bandwidth. Figures 7.2 and 7.3 show the relationship between the measured small browsing session times, the small FTP download times and the best two-dimensional predictor that could be constructed from the ping time and the un-congested bandwidth. This predictor is the best KPI combination that could be constructed and shows a correlation of 0.92 for the small browsing data and 0.98 for the small download data.

Figure 7.2 Small browsing session. Figure 7.3 Small FTP download.

For medium and large browsing/downloading it was not possible to fit any combination of KPIs (up to three dimensions) with a satisfactory correlation (above 0.9). The available bandwidth gives the highest correlation for these measurements, around 0.7. In the case of clean data links, the correlation between the available bandwidth and the link capacity was found to be 0.93. In the case of one TCP cross traffic stream, the available bandwidth estimate did not show an acceptable correlation with the data link capacity.
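The report does not give the exact functional form of this two-dimensional predictor; the sketch below assumes, purely for illustration, a linear fit in the ping time and the reciprocal of the un-congested bandwidth, and all numbers are hypothetical placeholders.

    # Minimal sketch of a two-dimensional predictor for small browse/download session
    # times from ping time and un-congested bandwidth. Functional form and data are
    # assumptions for illustration, not the report's actual model or measurements.
    import numpy as np

    ping_ms = np.array([10, 50, 120, 300, 80])          # measured ping times (hypothetical)
    ub_kbit = np.array([5000, 1000, 512, 256, 2000])    # un-congested bandwidth estimates
    session_s = np.array([0.4, 1.1, 2.6, 6.0, 1.3])     # small-browse session times

    X = np.column_stack([ping_ms, 1.0 / ub_kbit, np.ones_like(ping_ms, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(X, session_s, rcond=None)
    predicted = X @ coeffs

    corr = np.corrcoef(predicted, session_s)[0, 1]
    print(f"fit coefficients: {coeffs}, correlation: {corr:.2f}")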
Conclusion

In this project a black box measurement approach for assessing the perceived quality of data links was implemented. This quality is defined as the measured browsing and download session times. The measurement method uses the concept of a data chirp: a data chirp puts data on a link with ever increasing sending speed, and the delay behavior of the packets is then used to characterize the link. The chirp is implemented in two different ways. The first uses a set of lumped packets from which the un-congested bandwidth is estimated; the second uses a set of single packets from which the available bandwidth is estimated. Together with the ping time this allows a full characterization of the data link. From the data link characterization a prediction model for the session times was constructed. The model shows a correlation of 0.98 for the small download data set and of 0.92 for the browsing data set over a number of small pages. The model uses a two-dimensional regression fit derived from the ping time and the un-congested bandwidth. For medium and large browsing/downloading it was not possible to fit any combination of KPIs (up to three dimensions) with a satisfactory correlation (above 0.9); the available bandwidth gives the highest correlation for these measurements, around 0.7. Besides the session times, the model also allows estimating the link capacity. The correlation between the real link capacity, as set in the network simulator, and the chirp-estimated un-congested bandwidth was 0.99.

List of Acronyms

ADSL Asymmetric Digital Subscriber Line
FTP File Transfer Protocol
GSM Global System for Mobile communication
GPRS General Packet Radio Service
ITU International Telecommunications Union
LAN Local Area Network
PSTN Public Switched Telephone Network
TCP Transmission Control Protocol
TNO Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek (Netherlands Organisation for Applied Scientific Research)
WIFI Wireless Fidelity
UDP User Datagram Protocol
UMTS Universal Mobile Telecommunications System

References

[1] ITU-T Recommendation E.800 (08/94), Quality of service and dependability vocabulary.
[2] M. Jain, C. Dovrolis, Pathload: a Measurement Tool for Available Bandwidth Estimation, Proc. PAM02, 2002.
[3] V. J. Ribeiro, R. H. Riedi, R. G. Baraniuk, J. Navratil, L. Cottrell, pathChirp: Efficient Available Bandwidth Estimation for Network Paths, Paper 3824, Passive and Active Measurement Workshop, April 2003, La Jolla, California, USA.
[4] R. Prasad, M. Murray, C. Dovrolis, K. Claffy, Bandwidth Estimation: Metrics, Measurement Techniques, and Tools, IEEE Network, November-December 2003.
[5] TNO Report: Gap Analysis Circuit Switched Versus Packet Switched Voice and Video Telephony.
[6] ITU-T Rec. G.1030, Estimating end-to-end performance in IP networks for data applications, International Telecommunication Union, Geneva, Switzerland, November 2005.
[7] A. Ait Ali, F. Michaut, F. Lepage, End-to-End Available Bandwidth Measurement Tools: A Comparative Evaluation of Performances, CRAN (Centre de Recherche en Automatique de Nancy), UMR-CNRS 7039.

Appendix A: Measurement Results

Lumped Chirp Tests

Lumped chirp test with cross traffic and 5% loss. In the first test a TCP cross traffic stream was generated by sending 50 files of 1 MB each in a loop, in such a way that slow start does not become active again. A loss of 5% was set for this situation. After analyzing the behavior of TCP with Ethereal, we sent the extended chirp over the link to estimate the un-congested bandwidth. From the graph we can see that at the start the estimates are quite high; this behavior depends entirely on the state of the TCP mechanism, i.e. whether the stream is in the stable state or still in slow start.
The following parameters were set for this experiment:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: 1000 kbit/s
Loss: 5%
Cross traffic: TCP cross traffic, 50 files of 1 MB.

Figure 1 Estimated bandwidth versus packet lumps: results achieved by sending the extended chirp over a link of speed 1000 kbit/s.

The average estimated bandwidth over the time interval is 1030 kbit/s. Due to the loss set for this test, the number of observations is reduced to 34 (45 for an ideal link). The somewhat high bandwidth estimate comes from the higher bandwidth calculated for the small lump sizes; this is due to the buffers present in the link, which counteract the smearing of the packets. We also tested the same scheme in other scenarios. The average estimated bandwidth in the case of no cross traffic, or with loss on the link, is a bit higher than the actual fixed bandwidth; in the case of loss we simply get fewer estimates, as a lost packet causes a reading to be dropped.

Lumped chirp tests over real links.

Test 1: TNO internal network. For this test the extended chirp was sent over the internal network at TNO. In this case the only things that can be tweaked are the parameters related to the chirp; for the other factors we have no knowledge of the policies implemented in the TNO network. The following settings were used for this test:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 2 Estimated bandwidth versus packet lumps: results achieved by sending the extended chirp over the TNO network.

From Figure 2 we can see that there is packet loss in the network, as only 30 lump observations remain (45 under an ideal clean link), and that the estimate drops due to self-induced congestion at the higher rates. The average estimated un-congested bandwidth calculated over time is 2.10 Mbit/s. From the figure one can also judge that there are policies imposed along the path: if a data stream tries to take a larger share of the bandwidth, the router may not allow it to do so.

Test 2: Delft-Eindhoven over the Internet. For this test the extended chirp was sent over the internet between Delft and Eindhoven. Again only the chirp parameters can be tweaked; we do not know what policies are implemented along the path. The following parameters were set for this test:
Number of lumps: 50
Maximum lump size: 7 packets
Packet size: 1500 bytes
Interval: 100 ms
Alpha: 0.97
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 3 Estimated bandwidth versus packet lumps: results achieved by sending the extended chirp over a link between Delft and Eindhoven.

From Figure 3 we can see that there is packet loss in the network, as only 32 lump observations remain (45 under an ideal clean link), and the estimate again drops due to self-induced congestion at the higher rates. The average estimated un-congested bandwidth calculated over the time interval is 2.30 Mbit/s. This experiment shows the same behavior as the TNO network: after some time the routers do not allow the data stream to take up a larger part of the bandwidth but restrict it to a limit.

Data Chirp Tests

Data chirp test with cross traffic and 5% loss.
In this test TCP cross traffic was generated over the virtual tunnel link. We wait a while so that TCP comes out of its slow start and then send the repeated chirp to estimate the available bandwidth of the link. The following settings were used for the experiment:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: 1000 kbit/s
Loss: 5%
Cross traffic: TCP cross traffic, 50 files of 5 MB.

Figure 4 Differential delays versus number of packets: data chirp over a 1000 kbit/s data link with TCP cross traffic.

The estimated available bandwidth obtained from the chirp was 839 kbit/s. This can be explained by the fact that whenever there is packet loss, TCP re-adjusts itself and releases part of the bandwidth it is using, which is not the case for UDP: the chirp takes up whatever bandwidth is available.

Data chirp tests over real links.

Test 1: TNO internal network. In this experiment the repeated data chirp was sent over the internal network at TNO, so we can only adjust the chirp parameters; we do not know what policies are implemented in the TNO network. The following settings were used for the test:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 5 Differential delays versus number of packets: data chirp over the TNO data link.

From Figure 5 we can see that there is packet loss in the network, as fewer than 400 packets are observed (400 under an ideal clean link without loss), and the fingerprint does not look smooth. There are quite a few excursions in the fingerprint, which can be due to the buffers in the links along the path. The available bandwidth estimated from this fingerprint is 491.92 kbit/s.

Test 2: Delft-Eindhoven over the Internet. In this experiment the repeated data chirp was sent over the internet between Delft and Eindhoven. We can only tweak the chirp parameters; we do not know what policies apply on the internet path. The settings were:
Number of packets: 400
Packet size: 1500 bytes
Interval: 200 ms
Alpha: 0.99
Link speed: unknown
Loss: unknown
Cross traffic: unknown.

Figure 6 Differential delays versus number of packets: data chirp over the internet between Delft and Eindhoven.

From Figure 6 we can see that there is packet loss on the internet path, as fewer than 400 packets are observed (400 under an ideal clean link without loss); this could be due to congestion or other factors along the path. The available bandwidth obtained from this fingerprint is 706.60 kbit/s. Excursions are present which could be due to buffers in the link. Performance parameters such as random packet loss and congestion packet loss are calculated from the observations received at the receiver side: random packet loss is calculated before the bending point and congestion packet loss after the bending point. In the following section different scenarios are analyzed. First we examined a clean link with the parameters: interval 200 ms, loss 5% and delay 10 ms; a delay of 10 ms represents a small simulated network. We ran the tests with varying bandwidth, loss and delay.

Figure 7 Data chirp tests over different links.

Figure 7 shows the outcome of the data chirp sent over a clean simulated link with the bandwidth set to 64 kbit/s, 256 kbit/s, 1000 kbit/s and 5000 kbit/s respectively. In this experiment we did not have any information about the underlying buffers.
After post-processing we were able to extract all the KPIs from the above signatures. The only noticeable problem, visible in Figure 7(a), is that the bending point comes too early, right at the start of the signature, which is not good enough: one cannot judge the link from the behavior of just one or two packets. This behavior is seen only for the 64 kbit/s link. An alternative is to increase the interval time, but that does not give a correct estimate of the available bandwidth. We use the same scheme as mentioned earlier to estimate the un-congested bandwidth, i.e. over lumps of seven packets. In the following case we put a TCP cross stream, i.e. a large file download, over the links to test the developed method.

Figure 8 Data chirp tests over links with one TCP cross traffic stream.

Figure 8 shows the outcome of the data chirp sent over simulated links with the bandwidth set to 64 kbit/s, 256 kbit/s, 1000 kbit/s and 5000 kbit/s respectively and one TCP cross traffic stream. One can easily observe an oscillation-like behavior in the figure. This is due to the presence of the cross traffic and the underlying buffers in the link, and in this situation it is not easy to extract the KPIs. Even for the un-congested bandwidth the method did not work, because due to the cross traffic and packet loss we were not able to receive whole lumps intact. We therefore modified the approach for both the un-congested and the available bandwidth. For the un-congested bandwidth, instead of calculating the bandwidth over the whole lump, we first find the largest intact lump received and calculate the bandwidth from it and then from the rest of the readings; the parameters for the un-congested bandwidth estimation were not changed. This gives a good estimate of the un-congested bandwidth. To get an estimate of the available bandwidth we tweaked some of the data chirp parameters: the inter-sending time between the lumped chirp and the data chirp was set to five seconds, the interval to 500 ms and alpha to 0.98.

Figure 9 Data chirp with interval 500 ms and alpha 0.98 over clean links.
Figure 10 Data chirp with interval 500 ms and alpha 0.98 with one TCP cross traffic stream.

Figures 9 and 10 show the fingerprints of the chirp after tweaking these parameters. It is clear from the figures that the fingerprints are quite clean, with no oscillations present. The problem caused by this adjustment, however, is that the estimate obtained after post-processing is not accurate: it is considerably higher than the actual bandwidth of the link. Although we were able to get a correct estimate of the un-congested bandwidth, this approach was therefore not adopted. The second approach we took was smoothing of the fingerprints. A window size of 25 is used, where the differential delays of the next 25 packets are averaged and the chirp signature is thereby smoothed. This approach was quite helpful in estimating the available bandwidth, as the oscillation effects were largely removed and we were able to obtain the estimate correctly. The smoothing was done on the observations received with the following parameters: interval 200 ms, alpha 0.99 and an inter-sending time between data chirps of five seconds.
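A minimal sketch of this smoothing step, assuming the differential delays are available as a Python list, could look as follows.

    # Minimal sketch of the fingerprint smoothing described above: each differential
    # delay is averaged with the delays of the next `window` packets (window = 25).
    def smooth(diff_delays, window=25):
        """Forward moving average of the differential delays."""
        out = []
        for i in range(len(diff_delays)):
            chunk = diff_delays[i:i + window]
            out.append(sum(chunk) / len(chunk))
        return out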
Figure 11 Data chirp with interval 200 ms and alpha 0.99 with one TCP cross traffic stream, after smoothing.

Figure 11 shows the smoothed versions of the fingerprints shown in Figure 8. With this smoothing technique the estimation can easily be done even for links with quite small buffers; for all these observations the buffer size was set to thirty packets. Small buffers have a greater impact on the fingerprint of the chirp, so the window size has to be adjusted accordingly.

Appendix B: End-to-End Data Link Quality Measurement Tool

Revision History
Version 1.0, 20.2.2007: Initial version.
Version 2.0, 25.3.2007: Results (delimitation added, results updated); phasing plan (realization phase added); control plan (time/capacity added); information updated; quality updated; organisation added.
Version 3.0, 05.06.2007: Information updated; organisation updated; test plan added.

Introduction

The developments in information technology of recent years have led to major advances in high-speed networking, multimedia capabilities for workstations and distributed multimedia applications. In particular, multimedia applications for computer supported cooperative work have been developed that allow groups of people to exchange information and to collaborate and cooperate on joint work. However, existing communication systems do not provide the end-to-end guarantees for multipoint communication services that are needed by these applications. In this thesis a communication architecture is described that offers end-to-end performance guarantees in conjunction with flexible multipoint communication services. The architecture is implemented in the Multipoint Communication Framework (MCF), which extends the basic communication services of existing operating systems. It orchestrates endsystem and network resources in order to provide end-to-end performance guarantees, and it provides multipoint communication services in which participants dynamically join and leave. The communication services are implemented by protocol stacks which form a three-layer hierarchy. The topmost layer, called the multimedia support layer, accesses the endsystem's multimedia devices. The transport layer implements end-to-end protocol functions that are used to forward multimedia data. The lowest layer, labelled the multicast adaptation layer, interfaces to various networks and provides a multipoint-to-multipoint communication service that is used by the transport layer. Each layer contains a set of modules that each implement a single protocol function; protocol stacks are dynamically composed out of modules, and each protocol uses a single module on each layer. Applications specify their service requirements as Quality of Service (QoS) parameters. The shift from PSTN/GSM/GPRS to ADSL/Cable/WiFi/UMTS technology and the corresponding shift from telephony to multimedia services will have a big impact on how the end-to-end quality as perceived by the customer can be measured, monitored and optimized. For each service (speech, audio, text, picture, video, browsing, file download) a different perceived quality model is needed in order to be able to predict customer satisfaction. This project proposal focuses on an integrated approach towards the measurement of the perceived quality of interactive browsing and file downloading over a data link.

Results

Problem definition: To place the overall end-to-end QoS problem into perspective, it is clear that the emergence and rapid acceptance of Internet and Intranet technologies is providing commercial and military systems with the opportunity to conduct business at reduced costs and greatly increased scales.
To take advantage of this opportunity, organizations are becoming increasingly dependent on large-scale distributed systems that operate in unbounded network environments. As the value of these transactions grows, companies are beginning to seek guarantees of dependability, performance and efficiency from their distributed application and network service providers. To provide adequate levels of service to customers, companies will eventually need levels of assured operations. These capabilities include policy-based prioritization of applications and users competing for system resources; guarantees of levels of provided performance, security, availability, data integrity and disaster recovery; and adaptivity to changing load and network conditions. The problem faced by networks today is that they do not always provide the services they are designed for; the task here is therefore to set the network parameters in such a way that they give the maximum output and the resources are used well.

Project goal: The ultimate goal of the project is to define a measurement method with which instantaneous insight can be created into the performance of all services that are carried via the link under consideration. It is clear that the operator delivering the best portfolio with the best quality for the lowest price will survive in the market. This means that an operator has to know how to set the network parameters in order to deliver the best end-to-end quality, based on the network indicators, the service characteristics and the characteristics of the end user equipment. This optimization will increase the number of new subscribers, leading to an increase in revenues. A fast, efficient method for combined data/streaming quality measurement is thus of vital importance.

Results: At the end of the project the deliverables will be:
A tool that is able to test the performance as well as the status of the network.
A report that accurately describes the processes and explains the choices made.
An analysis of the results and a list of recommendations to achieve the best possible results.
A presentation of the results to the TNO-ICT and ECR group at TU/e.

Delimitation: The project focuses on the estimation of the available as well as the un-congested bandwidth, so testing can only be done on the networks currently available in the company. Since this is a research activity, the approach will be based on trial and error.

Phasing Plan

Initial phase: Nov 2006 - Jan 2007. During this period the focus will be on current work going on in the market and on what tools are available and can be used to develop this new method for measuring link performance. All background and necessary knowledge will be gathered.

Design phase: Feb 2007 - April 2007. The following activities are planned after the initial phase: 1) investigation of hardware timing accuracy in order to improve measurement accuracy (find the best hardware available); 2) creation of a test setup that allows investigating the effects of cross traffic; 3) creation of a bearer-switching-like link simulator to investigate the effects of traffic channel capacity and channel capacity switching; 4) effect of packet loss; 5) buffer impact measurements. The measurements will be based on simulations; results will be plotted and analysed.

Preparation phase: May 2007. These measurements focus on the relation between: 1) browsing/download times and the chirp delay pattern;
2) audio/video streaming throughput and the chirp delay pattern; 3) re-buffering, pause/play and fast-forward behaviour of audio/video streaming in relation to the chirp behaviour. All of the above observations will be made on simulated as well as real networks using the developed tool.

Test phase: June 2007 - August 2007. The activities involved in the realization phase will be: 1) decide upon the nature of the approach that is going to be used; 2) generate results (cross traffic maps); 3) collect results and create a solid set for the evaluation; 4) write down the pros and cons of the method in the report; 5) run a new set of experiments and re-evaluate; 6) discuss the results with the supervisors at TNO-ICT; 7) update the report with the new findings.

Control Plan

Time/Capacity (norm data):
Starting date: 01.11.2006
Completing date: 31.08.2007
Final report: not yet known
Final presentation: not yet known
Duration of the phases:
Initiation phase: 6 (+/- 1) weeks
Design phase: 14 (+/- 2) weeks
Preparation phase: 2 (+/- 1) weeks
Realization phase: 14 (+/- 2) weeks

Quality: The quality of the measurement approach will be discussed with the supervisors at TNO-ICT.

Information (phase, output, status):
Literature study: study of background research papers (done); project proposal document (done); study of documentation for NS-2, TCL and Delphi (done); develop small applications in Delphi to get familiar with the tool and make TCL scripts for NS-2 to check the behaviour of TCP (done).
Experiment setup: install Fedora 6 on the Dell machine (done); recompile the Linux kernel, tweaking some kernel parameters (done); install the PHP GUI for the network emulator (done); develop the chirp method and implement it in a UDP server and client using the Indy socket library in Delphi (done).
Simulation and measurement: experiments using virtual tunnel network interfaces and simulations to observe the behaviour of the developed method (done).
Result analysis: analyse the results obtained from experiments over the tunnel and over real networks (done).
Documentation and presentation: first draft version in the last week of July; final version to be handed in in the second week of August.

Organization: Progress control: frequent meetings will be arranged with the project manager to show the progress of the project. Risk analysis: slow implementation (lack of knowledge or unexpected errors) - get help from co-workers in the company; requirement change requests from the project advisor can lead to a delay in designing the measurement approach.