Fair Resource Allocation for Data-Intensive Computing in the Cloud
To address the computing challenges of 'big data', a number of data-intensive computing frameworks (e.g., MapReduce, Dryad, Storm and Spark) have emerged and become popular. YARN is a de facto resource management platform that enables these frameworks to run together in a shared system. However, we observe that, in a cloud computing environment, the fair resource allocation policy implemented in YARN is not suitable because its memoryless resource allocation fashion leads to violations of a number of desirable properties in shared computing systems. This paper attempts to address these problems for YARN. Both single-level and hierarchical resource allocations are considered. For single-level resource allocation, we propose a novel fair resource allocation mechanism called Long-Term Resource Fairness (LTRF). For hierarchical resource allocation, we propose Hierarchical Long-Term Resource Fairness (H-LTRF) by extending LTRF. We show that both LTRF and H-LTRF can address the fairness problems of the current resource allocation policy and are thus suitable for cloud computing. Finally, we have developed LTYARN by implementing LTRF and H-LTRF in YARN, and our experiments show that it achieves better resource fairness than the existing fair schedulers of YARN.
EXISTING SYSTEM:
Resource sharing is based on two observations:
1). different users often have different resource demands;
2). even for an individual user, her demand changes over time.
Resource sharing can thereby achieve better utilization than the non-sharing case by enabling overloaded users to utilize unused resources from underloaded users. In the cloud environment, we can establish a multi-tenant computing system by pooling all computing instances rented by each user. The computing resources of the system can then be managed and shared among users for their analytical data computation with existing resource management systems such as YARN. Fairness is an important issue in resource sharing: only when fairness is guaranteed for users is resource sharing among them possible. One of the most popular fair allocation policies is (weighted) max-min fairness, which maximizes the minimum resource allocation obtained by a user in a shared computing system. It has been adopted by YARN as well as other popular resource sharing systems such as Mesos. Unfortunately, we observe that the fair policies implemented in these systems are memoryless, i.e., they allocate resources fairly at each instant without considering history information. We refer to these policies as MemoryLess Resource Fairness (MLRF).
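To make the memoryless behaviour concrete, the following sketch implements (weighted) max-min fairness via progressive filling. The function name and data layout are our own illustration, not YARN's API; the key point is that the result depends only on the demands at the current instant, which is exactly the MLRF property discussed above.

```python
def max_min_fair(capacity, demands, weights):
    """Weighted max-min fairness via progressive filling (illustrative).

    Users with small demands are fully satisfied, and the freed-up
    capacity is split among the remaining users in proportion to their
    weights.  The allocation consults only the *current* demands --
    no history is kept, which is the memoryless (MLRF) behaviour.
    """
    alloc = {u: 0.0 for u in demands}
    remaining = float(capacity)
    active = {u for u, d in demands.items() if d > 0}
    while active and remaining > 1e-9:
        total_w = sum(weights[u] for u in active)
        share = remaining / total_w          # fair share per unit weight
        satisfied = set()
        for u in list(active):
            give = min(demands[u] - alloc[u], share * weights[u])
            alloc[u] += give
            remaining -= give
            if alloc[u] >= demands[u] - 1e-9:
                satisfied.add(u)             # demand fully met
        if not satisfied:
            break                            # all capacity distributed
        active -= satisfied                  # redistribute to the rest
    return alloc
```

For example, with capacity 10, equal weights and demands {a: 2, b: 8, c: 8}, user a receives its full demand of 2 and b and c split the rest, receiving 4 each.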
PROPOSED SYSTEM:
In this paper, we propose Long-Term Resource Fairness (LTRF) and show that it can solve the aforementioned problems. We start with single-level resource allocation and then extend it to hierarchical resource allocation. For single-level resource allocation, we demonstrate that LTRF has good properties that are important for fair resource allocation in a shared cloud system. Five such properties are sharing incentive, cost-efficient workload incentive, resource-as-you-contributed fairness, strategy-proofness and Pareto efficiency.
LTRF provides incentives for users to submit meaningful workloads and share resources by ensuring that no user is better off in an exclusively non-sharing computing system than in the sharing case. Moreover, LTRF guarantees the amount of resources a user should receive in terms of the amount of resources she contributed, even when her resource demand varies over time. In addition, LTRF is strategy-proof: it ensures that a user cannot get more resources by lying about her resource demand.
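The history-aware idea behind LTRF can be sketched as follows. This is a deliberately simplified illustration of long-term accounting, not the authors' exact algorithm: each allocation decision consults the cumulative resources every user has received so far, so a user who was idle earlier is compensated later.

```python
import heapq

def ltrf_allocate(slots, demands, usage):
    """Sketch of history-aware (long-term) allocation -- hypothetical.

    Unlike a memoryless policy, each decision consults `usage`, the
    cumulative resources each user has received so far: every slot
    goes to the demanding user with the smallest historical total,
    so temporarily idle users are paid back in later rounds.
    """
    # Min-heap keyed by cumulative usage, holding demanding users only.
    heap = [(usage[u], u) for u, d in demands.items() if d > 0]
    heapq.heapify(heap)
    grants = {u: 0 for u in demands}
    while slots > 0 and heap:
        used, u = heapq.heappop(heap)
        grants[u] += 1            # grant one slot to the least-served user
        usage[u] = used + 1       # update the long-term account
        slots -= 1
        if grants[u] < demands[u]:
            heapq.heappush(heap, (usage[u], u))
    return grants
```

For instance, if user b contributed resources earlier but used none (cumulative usage 0) while user a has already consumed 10 units, b's demand is served first until the long-term accounts balance out.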
We have extended LTRF to support hierarchical resource allocation by considering organizational priorities in resource allocation. We show that the combination of hierarchical and long-term resource allocation brings new challenges that do not exist in single-level long-term resource allocation. In particular, a naive extension of LTRF can lead to starvation. We therefore propose a starvation-aware Hierarchical LTRF (H-LTRF) based on a timeout technique. We have implemented LTRF and H-LTRF in YARN by developing a long-term fair scheduler, LTYARN.
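The starvation-avoidance role of the timeout can be sketched as follows. This is a hypothetical simplification of the idea, not the paper's actual H-LTRF scheduler: the long-term fair choice is overridden for any queue that has waited longer than the timeout since it was last served, which bounds the delay that strict long-term accounting could otherwise impose.

```python
def pick_queue(queues, now, timeout):
    """Starvation-aware queue selection (hypothetical sketch).

    `queues` maps a queue name to a dict with keys "usage"
    (cumulative resources received) and "last_served" (timestamp).
    Normally the queue with the least cumulative usage wins (the
    long-term fair choice), but any queue whose waiting time exceeds
    `timeout` is served first, most-overdue queue before the others.
    """
    starving = [q for q, s in queues.items()
                if now - s["last_served"] > timeout]
    if starving:
        # Override long-term fairness to prevent starvation.
        return min(starving, key=lambda q: queues[q]["last_served"])
    # Otherwise fall back to the long-term fair choice.
    return min(queues, key=lambda q: queues[q]["usage"])
```

With a small timeout, a heavily-served queue that has nevertheless waited too long is scheduled ahead of a lightly-served one; with a large timeout, the policy degenerates to pure long-term fairness.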
1). LTRF can guarantee the Service-Level Agreement (SLA) by minimizing the sharing loss and bringing substantial sharing benefit to each user, whereas MLRF cannot;
2). the shared approaches using either LTRF or MLRF can achieve better performance than the non-shared one, or at least run as fast in the shared system as in the non-shared partitioning case; this performance finding is consistent with previous work such as Mesos;
3). H-LTRF can address the possible starvation problem in hierarchical resource allocation.
This paper studies resource allocation fairness for YARN in the cloud environment. We find that the existing max-min fair policy used in YARN is not suitable for cloud computing systems. We show that this is because of its memoryless resource allocation manner, which causes three serious problems, i.e., the cost-inefficient workload submission problem, the strategy-proofness problem and the resource-as-you-contributed unfairness problem. To address these problems for YARN, we propose LTRF and H-LTRF for single-level and hierarchical resource allocation, respectively. We demonstrate that they are suitable for cloud computing systems. Finally, we have developed LTYARN, a long-term fair scheduler for YARN, and our experimental results validate the effectiveness of our solutions.