Load balancing is a method of distributing workloads across multiple computing resources, such as a cluster of computers, central processing units, network links, or disk drives, to optimize resource utilization, maximize throughput, minimize response times, and avoid overloading any single resource. Using multiple load-balanced components, rather than a single component, increases reliability and speed through redundancy. Load balancing is achieved through software or dedicated hardware such as a multi-layer switch or a Domain Name System (DNS) server process. Server farms are just one of the many uses that benefit from load balancing, which also allows for a significantly higher level of fault tolerance.

When a router learns multiple routes to a specific network via multiple routing protocols, it installs the route with the lowest administrative distance into the routing table. Sometimes the router must select among several routes to the same destination that share the same administrative distance; in this case, it chooses the path with the lowest metric. Each routing protocol calculates its metric differently, and you may need to manipulate the routes to achieve the desired load balancing behavior.

Network Load Balancing

Network load balancing distributes IP traffic to multiple instances of a TCP/IP service, such as a Web server, each running on a host within the cluster. It transparently splits client requests between hosts and allows clients to access the cluster using one or more "virtual" IP addresses. From the client's perspective, the cluster appears to be a single server responding to these requests. As business traffic increases [...] processing speed and memory, a thoughtful ratio or method may be the best option. The default method is called round robin.
In this method, each connection request is passed to the next server in line, ultimately distributing requests evenly across the cluster. It works in most configurations. In the Ratio method, requests are distributed according to a defined set of ratios established by the administrator, allowing a distribution of requests tailored to the speed and memory of each server. Two other ratio methods, Dynamic Ratio Node and Dynamic Ratio Member, are similar to Ratio except that the ratios are system-driven and their values are not static. Weighted methods work best when server capabilities differ and require a weighted distribution of connection requests (University of Tennessee, 2014).
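The round robin and ratio methods described above can be sketched in a few lines of Python. This is a minimal illustration, not a production balancer; the class names, server names, and the 3:1 ratio are invented for the example.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Round robin: hand each incoming request to the next server in line,
    spreading requests evenly across the cluster."""
    def __init__(self, servers):
        self._next = cycle(servers)

    def pick(self):
        return next(self._next)

class RatioBalancer:
    """Ratio: distribute requests according to administrator-defined weights,
    so a faster server with more memory can receive a larger share."""
    def __init__(self, ratios):
        # ratios maps server name -> weight; expand into a weighted rotation,
        # e.g. {"web1": 3, "web2": 1} -> web1, web1, web1, web2, repeat.
        expanded = [name for name, weight in ratios.items()
                    for _ in range(weight)]
        self._next = cycle(expanded)

    def pick(self):
        return next(self._next)

# Round robin cycles evenly through the cluster.
rr = RoundRobinBalancer(["web1", "web2", "web3"])
print([rr.pick() for _ in range(6)])
# ['web1', 'web2', 'web3', 'web1', 'web2', 'web3']

# Ratio sends three requests to web1 for every one sent to web2.
ratio = RatioBalancer({"web1": 3, "web2": 1})
print([ratio.pick() for _ in range(8)])
# ['web1', 'web1', 'web1', 'web2', 'web1', 'web1', 'web1', 'web2']
```

A dynamic ratio method would differ only in that the weights are recomputed from live server metrics rather than set statically by the administrator.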