computer science crazy
Joined: Dec 2008
20-09-2009, 04:07 PM
This paper describes cloud computing, a computing platform for the next generation of the Internet. The paper defines clouds, explains the business benefits of cloud computing, and outlines cloud architecture and its major components. Readers will discover how a business can use cloud computing to foster innovation and reduce IT costs.

1. Introduction

Enterprises strive to reduce computing costs. Many start by consolidating their IT operations and later introduce virtualization technologies. Cloud computing takes these steps to a new level and allows an organization to further reduce costs through improved utilization, reduced administration and infrastructure costs, and faster deployment cycles. The cloud is a next-generation platform that provides dynamic resource pools, virtualization, and high availability.
Cloud computing describes both a platform and a type of application. A cloud computing platform dynamically provisions, configures, reconfigures, and deprovisions servers as needed. Cloud applications are applications that are extended to be accessible through the Internet. These cloud applications use large data centers and powerful servers that host Web applications and Web services.
2. Cloud computing infrastructure accelerates and fosters the adoption of innovations

Enterprises are increasingly making innovation their highest priority. They realize they need to seek new ideas and unlock new sources of value. Driven by the pressure to cut costs and grow simultaneously, they realize that it is not possible to succeed simply by doing the same things better. They know they have to do new things that produce better results.

Cloud computing enables innovation. It alleviates the need of innovators to find resources to develop, test, and make their innovations available to the user community. Innovators are free to focus on the innovation rather than on the logistics of finding and managing the resources that enable it. Cloud computing helps leverage innovation as early as possible to deliver business value to IBM and its customers.

Fostering innovation requires unprecedented flexibility and responsiveness. The enterprise should provide an ecosystem where innovators are not hindered by excessive processes, rules, and resource constraints. In this context, a cloud computing service is a necessity: it comprises an automated framework that can deliver standardized services quickly and cheaply. Servers in the cloud can be physical machines or virtual machines, and advanced clouds typically include other computing resources such as storage area networks (SANs), network equipment, firewalls, and other security devices. Anyone with a suitable Internet connection and a standard browser can access a cloud application.
A cloud is a pool of virtualized computer resources. A cloud can:

• Host a variety of different workloads, including batch-style back-end jobs and interactive, user-facing applications
• Allow workloads to be deployed and scaled out quickly through the rapid provisioning of virtual machines or physical machines
• Support redundant, self-recovering, highly scalable programming models that allow workloads to recover from many unavoidable hardware/software failures
• Monitor resource use in real time to enable rebalancing of allocations when needed

Cloud computing environments support grid computing by quickly providing physical and virtual servers on which grid applications can run. Cloud computing should not be confused with grid computing, however. Grid computing involves dividing a large task into many smaller tasks that run in parallel on separate servers. Grids require many computers, typically in the thousands, and commonly use servers, desktops, and laptops. Clouds also support non-grid environments, such as a three-tier Web architecture running standard or Web 2.0 applications. A cloud is more than a collection of computer resources because a cloud provides a mechanism to manage those resources. Management includes provisioning, change requests, reimaging, workload rebalancing, deprovisioning, and monitoring.
Cloud computing infrastructures can allow enterprises to achieve more efficient use of their IT hardware and software investments. They do this by breaking down the physical barriers inherent in isolated systems and automating the management of the group of systems as a single entity. Cloud computing is an example of an ultimately virtualized system, and a natural evolution for data centers that employ automated systems management, workload balancing, and virtualization technologies. A cloud infrastructure can be a cost-efficient model for delivering information services, reducing IT management complexity, promoting innovation, and increasing responsiveness through real-time workload balancing. The cloud makes it possible to launch Web 2.0 applications quickly and to scale up applications as much as needed, when needed.
The platform supports traditional Java and Linux, Apache, MySQL, PHP (LAMP) stack-based applications as well as new architectures such as MapReduce and the Google File System, which provide a means to scale applications across thousands of servers instantly.

4. Architecture
4.1. Cloud Computing Application Architecture
This section gives the basic architecture of a cloud computing application. Cloud computing is the shift of computing to a host of hardware infrastructure that is distributed in the cloud. The commodity hardware infrastructure consists of various low-cost data servers that are connected to the system and provide their storage, processing, and other computing resources to the application. Cloud computing involves running applications on virtual servers that are allocated on this distributed hardware infrastructure. These virtual servers are provisioned so that service level agreements and reliability requirements are met. There may be multiple instances of the same virtual server accessing different parts of the hardware infrastructure, so that multiple copies of the application are ready to take over on another one's failure.
The virtual server distributes the processing across the infrastructure; the computation is carried out and the result returned. A workload distribution management system, also known as the grid engine, manages the different requests coming to the virtual servers. This engine takes care of creating multiple copies and preserving the integrity of the data stored in the infrastructure, and it adjusts itself so that even under heavier load the processing is completed as required.
The different workload management systems are hidden from the users: for the user, the processing is simply done and the result obtained, with no question of where or how it was done. Users are billed based on their usage of the system; as said before, the commodity is now cycles and bytes. Billing is usually on the basis of usage per CPU-hour or per GB of data transferred.
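As a rough illustration, this kind of metered, pay-per-use billing can be sketched as below. The rates are hypothetical, chosen only to make the arithmetic concrete; they are not any provider's actual prices.

```python
# Hypothetical usage-based billing. Both rates are illustrative assumptions.
CPU_HOUR_RATE = 0.10      # dollars per CPU-hour (assumed)
GB_TRANSFER_RATE = 0.15   # dollars per GB transferred (assumed)

def monthly_bill(cpu_hours, gb_transferred):
    """Compute a simple pay-per-use bill from metered usage."""
    return round(cpu_hours * CPU_HOUR_RATE + gb_transferred * GB_TRANSFER_RATE, 2)

# 100 CPU-hours of compute plus 20 GB of transfer:
print(monthly_bill(100, 20))  # 13.0
```

The point of the model is that an idle application costs (almost) nothing: with zero cycles and zero bytes consumed, the bill is zero.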
4.2. Server Architecture
Cloud computing makes use of a large physical resource pool in the cloud. As said above, cloud computing services and applications use virtual server instances built upon this resource pool. Two applications help in managing the server instances, the resources, and the use of those resources by the virtual server instances. One is the Xen hypervisor, which provides an abstraction layer between the hardware and the virtual OS so that the distribution of resources and processing is well managed. Another widely used application is the Enomalism server management system, which is used to manage the infrastructure platform. When Xen is used to virtualize the servers over the infrastructure, a thin software layer known as the Xen hypervisor is inserted between the server's hardware and the operating system. This provides an abstraction layer that allows each physical server to run one or more virtual servers, effectively decoupling the operating system and its applications from the underlying physical server.
The Xen hypervisor is a unique open-source technology, developed collaboratively by the Xen community and engineers at over 20 of the most innovative data center solution vendors, including AMD, Cisco, Dell, HP, IBM, Intel, Mellanox, Network Appliance, Novell, Red Hat, SGI, Sun, Unisys, Veritas, Voltaire, and Citrix. Xen is licensed under the GNU General Public License (GPL2) and is available at no charge in both source and object format. The Xen hypervisor is also exceptionally lean: less than 50,000 lines of code. That translates to extremely low overhead and near-native performance for guests. Xen re-uses existing device drivers (both closed and open source) from Linux, making device management easy. Moreover, Xen is robust to device driver failure and protects both guests and the hypervisor from faulty or malicious drivers.

The Enomalism virtualized server management system is a complete virtual server infrastructure platform that helps in effective management of the resources. Enomalism can be used to tap into the cloud just as you would into a remote server. It brings together features such as deployment planning, load balancing, and resource monitoring. Enomalism is an open-source application with a very simple, easy-to-use web-based user interface. Its modular architecture allows the creation of additional system add-ons and plugins. It supports one-click deployment of distributed or replicated applications on a global basis, and it supports the management of various virtual environments including KVM/QEMU, Amazon EC2, Xen, OpenVZ, Linux Containers, and VirtualBox. It has fine-grained user permissions and access privileges.
4.3. MapReduce
MapReduce is a software framework developed at Google in 2003 to support parallel computations over large (multiple-petabyte) data sets on clusters of commodity computers. The framework is largely inspired by the 'map' and 'reduce' functions commonly used in functional programming, although the actual semantics of the framework are not the same. It is a programming model and an associated implementation for processing and generating large data sets, and many real-world tasks are expressible in this model. MapReduce implementations have been written in C++, Java, and other languages. Programs written in this functional style are automatically parallelized and executed on the cloud. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. The computation takes a set of input key/value pairs and produces a set of output key/value pairs. The user of the MapReduce library expresses the computation as two functions: Map and Reduce. Map, written by the user, takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key I and passes them to the Reduce function.
The Reduce function, also written by the user, accepts an intermediate key I and a set of values for that key. It merges these values together to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's Reduce function via an iterator, which makes it possible to handle lists of values that are too large to fit in memory.
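The Map/Reduce contract described above can be sketched in a few lines of Python with the classic word-count example. This toy driver runs sequentially in one process; a real implementation distributes the map and reduce calls across thousands of machines, but the user-visible programming model is exactly these two functions.

```python
from collections import defaultdict

def map_fn(doc_id, text):
    """Map: for each word in the document, emit an intermediate (word, 1) pair."""
    for word in text.split():
        yield word, 1

def reduce_fn(word, counts):
    """Reduce: merge all intermediate counts for one key into a single total."""
    yield word, sum(counts)

def mapreduce(inputs, map_fn, reduce_fn):
    # Shuffle phase: group every intermediate value by its intermediate key.
    groups = defaultdict(list)
    for key, value in inputs:
        for k, v in map_fn(key, value):
            groups[k].append(v)
    # Reduce phase: one reduce_fn call per distinct intermediate key.
    output = {}
    for k, vs in groups.items():
        for rk, rv in reduce_fn(k, vs):
            output[rk] = rv
    return output

result = mapreduce([("d1", "the cat"), ("d2", "the dog")], map_fn, reduce_fn)
print(result)  # {'the': 2, 'cat': 1, 'dog': 1}
```

Note how the framework, not the user, owns the grouping step: the programmer only ever sees one key at a time, which is what makes the model trivially parallelizable.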
MapReduce achieves reliability by parceling out a number of operations on the data set to each node in the network; each node is expected to report back periodically with completed work and status updates. If a node falls silent for longer than that interval, the master node records the node as dead and sends the node's assigned work out to other nodes. Individual operations use atomic renaming of file outputs as a check that no conflicting parallel threads are running; when files are renamed, they can also be copied to another name in addition to the name of the task (allowing for side effects).
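The heartbeat-based failure handling just described can be sketched as follows. The timeout value and the data structures are illustrative assumptions, not taken from any particular implementation.

```python
HEARTBEAT_TIMEOUT = 30  # seconds of silence before a worker is declared dead (assumed)

def reassign_dead_work(workers, now):
    """Collect the tasks of every worker whose last heartbeat is too old.

    `workers` maps worker id -> {"last_seen": timestamp, "tasks": [...]}.
    Dead workers are removed from the table and their tasks returned so the
    master can hand them to live workers.
    """
    orphaned = []
    for wid in list(workers):  # copy keys: we mutate the dict while iterating
        if now - workers[wid]["last_seen"] > HEARTBEAT_TIMEOUT:
            orphaned.extend(workers.pop(wid)["tasks"])
    return orphaned

workers = {
    "w1": {"last_seen": 100, "tasks": ["map-3"]},
    "w2": {"last_seen": 5, "tasks": ["map-7", "reduce-1"]},
}
print(reassign_dead_work(workers, now=110))  # ['map-7', 'reduce-1']
```

Because map and reduce tasks are deterministic and side-effect-free (outputs are committed by atomic rename), re-running an orphaned task on another node is always safe.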
4.4. Google File System
Google File System (GFS) is a scalable distributed file system developed by Google for data-intensive applications. It is designed to provide efficient, reliable access to data using large clusters of commodity hardware. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients. Files are divided into chunks of 64 megabytes, which are only extremely rarely overwritten or shrunk; files are usually appended to or read.
It is also designed and optimized to run on computing clusters whose nodes consist of cheap, commodity computers, which means precautions must be taken against the high failure rate of individual nodes and the subsequent data loss. Other design decisions select for high data throughput, even when it comes at the cost of latency.

The nodes are divided into two types: one Master node and a large number of Chunkservers. Chunkservers store the data files, with each individual file broken up into fixed-size chunks (hence the name) of about 64 megabytes, similar to clusters or sectors in regular file systems. Each chunk is assigned a unique 64-bit label, and logical mappings of files to constituent chunks are maintained. Each chunk is replicated several times throughout the network, with the minimum being three, and more for files that are in high demand or need more redundancy.

The Master server does not usually store the actual chunks, but rather all the metadata associated with them: the tables mapping the 64-bit labels to chunk locations and the files they make up, the locations of the copies of the chunks, what processes are reading or writing to a particular chunk, or taking a snapshot of the chunk pursuant to replicating it (usually at the instigation of the Master server when, due to node failures, the number of copies of a chunk has fallen beneath the set number). All this metadata is kept current by the Master server periodically receiving updates from each chunkserver ("heartbeat" messages).

Permissions for modifications are handled by a system of time-limited, expiring leases, where the Master server grants permission to a process for a finite period of time, during which no other process will be granted permission by the Master server to modify the chunk. The modified chunkserver, which is always the primary chunk holder, then propagates the changes to the chunkservers holding the backup copies. The changes are not saved until all chunkservers acknowledge, thus guaranteeing the completion and atomicity of the operation.

Programs access the chunks by first querying the Master server for the locations of the desired chunks; if the chunks are not being operated on (if there are no outstanding leases), the Master replies with the locations, and the program then contacts and receives the data from the chunkserver directly. As opposed to many file systems, GFS is not implemented in the kernel of an operating system but is accessed through a library, to avoid overhead.
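The file-to-chunk mapping described above can be illustrated with a small sketch. The chunk size and minimum replication factor follow the figures in the text (64 MB, three replicas); the label counter and the round-robin placement are simplifications of what the real Master does (which also weighs load and rack placement).

```python
import itertools

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks, as in GFS
REPLICAS = 3                   # minimum replication factor

_labels = itertools.count()    # stand-in for GFS's unique 64-bit chunk labels

def split_into_chunks(file_size):
    """Return the chunk labels a file of `file_size` bytes maps to."""
    n_chunks = -(-file_size // CHUNK_SIZE)  # ceiling division
    return [next(_labels) for _ in range(n_chunks)]

def place_replicas(chunk, servers):
    """Pick REPLICAS distinct chunkservers for one chunk (simple round-robin;
    the real Master also considers disk utilization and failure domains)."""
    return [servers[(chunk + i) % len(servers)] for i in range(REPLICAS)]

chunks = split_into_chunks(200 * 1024 * 1024)   # a 200 MB file
print(len(chunks))                              # 4 chunks (last one partial)
print(place_replicas(chunks[0], ["cs0", "cs1", "cs2", "cs3", "cs4"]))
```

The Master only keeps tables like these in memory; the chunk bytes themselves never pass through it, which is what lets a single Master coordinate a very large cluster.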
4.5. Hadoop

Hadoop is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications with both reliability and data motion. Hadoop implements the computation paradigm named MapReduce, which was explained above: the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both MapReduce and the distributed file system are designed so that node failures are automatically handled by the framework. Hadoop is implemented in Java. In Hadoop, the combination of all the JAR files and classes needed to run a MapReduce program is called a job. All of these components are collected into a JAR which is usually referred to as the job file. To execute a job, it is submitted to a jobTracker and then executed.

Tasks in each phase are executed in a fault-tolerant manner. If nodes fail in the middle of a computation, the tasks assigned to them are redistributed among the remaining nodes. Since we are using MapReduce, having many map and reduce tasks enables good load balancing and allows failed tasks to be re-run with small runtime overhead. The Hadoop MapReduce framework has a master/slave architecture: a single master server, or jobTracker, and several slave servers, or taskTrackers, one per node in the cluster. The jobTracker is the point of interaction between the users and the framework. Users submit jobs to the jobTracker, which puts them in a queue of pending jobs and executes them on a first-come, first-served basis. The jobTracker manages the assignment of MapReduce tasks to the taskTrackers. The taskTrackers execute tasks upon instruction from the jobTracker and also handle data motion between the 'map' and 'reduce' phases of the MapReduce job.

Hadoop has received wide industry adoption and is used along with other cloud computing technologies, such as the Amazon services, to make better use of resources. There are many instances where Hadoop has been used. Amazon uses Hadoop to process the millions of sessions it uses for analytics, on clusters of about 1 to 100 nodes. Facebook uses Hadoop to store copies of internal logs and dimension data sources and uses it as a source for reporting/analytics and machine learning. The New York Times used Hadoop for large-scale image conversions. Yahoo uses Hadoop to support research for advertisement systems and web search tools, and also runs scaling tests on it to support the development of Hadoop itself.
5. Cloud Computing Services
Even though cloud computing is a relatively new technology, many companies already offer cloud computing services. Companies such as Amazon, Google, Yahoo, IBM, and Microsoft are all players in the cloud computing services industry. Amazon is the pioneer, with services like EC2 (Elastic Compute Cloud) and S3 (Simple Storage Service) dominating the industry; its experience gives it an advantage over the others. Microsoft has good knowledge of the fundamentals of cloud science and is building massive data centers. IBM, the king of business computing and traditional supercomputers, has teamed up with Google to get a foothold in the clouds. Google, whose own infrastructure was built from the ground up on commodity hardware, is far and away the leader in cloud computing.
5.1. Amazon Web Services
'Amazon Web Services' is the set of cloud computing services offered by Amazon. It comprises four different services: Elastic Compute Cloud (EC2), Simple Storage Service (S3), Simple Queue Service (SQS), and Simple Database Service (SDB).
1. Elastic Compute Cloud (EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers by providing on-demand processing power. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use, and it provides developers with the tools to build failure-resilient applications and isolate themselves from common failure scenarios.

Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to requisition machines, load them with your custom application environment, manage your network's access permissions, and run your image using as many or as few systems as you desire. To set up an Amazon EC2 node, we first create an EC2 node configuration, which consists of all our applications, libraries, data, and associated configuration settings. This configuration is then saved as an AMI (Amazon Machine Image). There are also several stock Amazon AMIs available which can be customized and used. We can then start, terminate, and monitor as many instances of the AMI as needed.

Amazon EC2 enables you to increase or decrease capacity within minutes. You can commission one, hundreds, or even thousands of server instances simultaneously, so an application can automatically scale itself up and down depending on its needs. You have root access to each instance and can interact with it as you would with any machine. You have a choice of several instance types, allowing you to select a configuration of memory, CPU, and instance storage that is optimal for your application. Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and reliably commissioned, and it provides web service interfaces to configure firewall settings that control network access to and between groups of instances. You are charged at the end of each month for the EC2 resources you actually consumed, so charging is based on actual usage.
2. Simple Storage Service (S3)
S3, the Simple Storage Service, offers storage in the cloud: a highly available, large-scale data store designed for interactive online use. S3 is storage for the Internet. It is designed to make web-scale computing easier for developers: S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. Amazon S3 allows writing, reading, and deleting objects containing from 1 byte to 5 gigabytes of data each. The number of objects that you can store is unlimited. Each object is stored in a bucket and retrieved via a unique developer-assigned key. A bucket can be located anywhere in Europe or the Americas but can be accessed from anywhere. Authentication mechanisms are provided to ensure that the data is kept secure from unauthorized access: objects can be made private or public, and rights can be granted to specific users for particular objects. Like EC2, S3 works on a pay-only-for-what-you-use method of payment.
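The bucket/key model described above can be mimicked with a tiny in-memory sketch. The real S3 is a remote web service; the class and method names here are illustrative only, not the S3 API.

```python
class Bucket:
    """A toy S3-style bucket: objects are addressed by developer-assigned keys."""

    MAX_OBJECT_SIZE = 5 * 1024**3  # 5 GB per object, the limit cited in the text

    def __init__(self, name):
        self.name = name
        self._objects = {}  # key -> object bytes

    def put(self, key, data):
        """Store (or overwrite) an object under a key, enforcing the size rule."""
        if not 1 <= len(data) <= self.MAX_OBJECT_SIZE:
            raise ValueError("objects must be 1 byte to 5 GB")
        self._objects[key] = data

    def get(self, key):
        """Retrieve an object by its key."""
        return self._objects[key]

    def delete(self, key):
        """Remove an object."""
        del self._objects[key]

b = Bucket("my-bucket")
b.put("photos/cat.jpg", b"\xff\xd8\xff")   # keys may look like paths, but are flat
print(b.get("photos/cat.jpg"))
b.delete("photos/cat.jpg")
```

Note that keys are a flat namespace: "photos/cat.jpg" is just a string, and any directory-like structure is purely a naming convention on the client side.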
3. Simple Queue Service (SQS)
Amazon Simple Queue Service (SQS) offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers. By using SQS, developers can simply move data between distributed components of their applications that perform different tasks, without losing messages and without requiring each component to be always available. With SQS, developers can create an unlimited number of queues, each of which can send and receive an unlimited number of messages. Messages can be retained in a queue for up to 4 days. The service is simple, reliable, secure, and scalable.
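The decoupling SQS provides can be approximated in-process with Python's standard `queue.Queue`: the producer and consumer never talk to each other directly, only to the queue. Real SQS adds durability, retention, and distribution across machines, so this is only a sketch of the pattern.

```python
import queue
import threading

messages = queue.Queue()  # stands in for a hosted SQS queue

def producer():
    # One component enqueues work without caring who consumes it, or when.
    for i in range(3):
        messages.put(f"task-{i}")

def consumer(results):
    # Another component drains the queue whenever it happens to be running;
    # get() blocks until a message is available.
    for _ in range(3):
        results.append(messages.get())

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # ['task-0', 'task-1', 'task-2']
```

If the consumer is down when messages arrive, nothing is lost; the messages simply wait in the queue. That is exactly the availability decoupling the text describes.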
4. Simple Database Service (SDB)
Amazon SimpleDB is a web service for running queries on structured data in real time. This service works in close conjunction with Amazon S3 and EC2, collectively providing the ability to store, process, and query data sets in the cloud. These services are designed to make web-scale computing easier and more cost-effective for developers. Traditionally, this type of functionality is accomplished with a clustered relational database, which requires a sizable upfront investment and often a DBA to maintain and administer it. Amazon SDB provides the same functionality without the operational complexity: it requires no schema, automatically indexes your data, and provides a simple API for storage and access. Developers gain access to these functions from within Amazon's proven computing environment, are able to scale instantly, and need pay only for what they use.
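SimpleDB's schema-less model (items with arbitrary attribute/value pairs, automatically indexed) can be caricatured in a few lines. The class and method names here are illustrative; this is not the SimpleDB API, only the shape of its data model.

```python
from collections import defaultdict

class Domain:
    """A toy SimpleDB-style domain: items hold arbitrary attributes, no schema."""

    def __init__(self):
        self.items = {}                 # item name -> {attribute: value}
        self.index = defaultdict(set)   # (attribute, value) -> item names

    def put(self, item, **attrs):
        """Store attributes on an item; every attribute is indexed automatically."""
        self.items.setdefault(item, {}).update(attrs)
        for a, v in attrs.items():
            self.index[(a, v)].add(item)

    def query(self, attr, value):
        """Find all items whose attribute equals the value, via the index."""
        return sorted(self.index[(attr, value)])

d = Domain()
d.put("song1", artist="Radiohead", year="2003")  # no schema to declare first
d.put("song2", artist="Radiohead", year="2007")
print(d.query("artist", "Radiohead"))  # ['song1', 'song2']
```

Because there is no schema, two items in the same domain can carry entirely different attributes, which is the flexibility the text contrasts with a clustered relational database.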
5.2. Google App Engine
Google App Engine lets you run your web applications on Google's infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. You can serve your app using a free domain name on the appspot.com domain, or use Google Apps to serve it from your own domain. You can share your application with the world, or limit access to members of your organization. App Engine costs nothing to get started: sign up for a free account, and you can develop and publish your application at no charge and with no obligation. A free account can use up to 500 MB of persistent storage and enough CPU and bandwidth for about 5 million page views a month. Google App Engine makes it easy to build an application that runs reliably, even under heavy load and with large amounts of data. The environment includes the following features:
• dynamic web serving, with full support for common web technologies
• persistent storage with queries, sorting, and transactions
• automatic scaling and load balancing
• APIs for authenticating users and sending email using Google Accounts
• a fully featured local development environment that simulates Google App Engine on your computer
Google App Engine applications are implemented using the Python programming language. The runtime environment includes the full Python language and most of the Python standard library. Applications run in a secure environment that provides limited access to the underlying operating system. These limitations allow App Engine to distribute web requests for the application across multiple servers, and to start and stop servers to meet traffic demands. App Engine includes a service API for integrating with Google Accounts.
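The request-handling model this describes, where the platform routes each web request to your Python code and is free to run that code on any of its servers, can be illustrated with a minimal WSGI handler. This is generic Python, not App Engine-specific code; it stands in for the kind of handler such a platform hosts.

```python
def application(environ, start_response):
    """A minimal WSGI app: the platform calls this once per web request."""
    body = f"Hello from {environ.get('PATH_INFO', '/')}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# Exercise the handler directly, without running a server:
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

resp = b"".join(application({"PATH_INFO": "/greet"}, fake_start_response))
print(captured["status"], resp)  # 200 OK b'Hello from /greet'
```

Because the handler holds no state between requests, the platform can run any request on any server, which is precisely what makes the automatic scaling described above possible.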
Your application can allow a user to sign in with a Google account and access the email address and displayable name associated with it. Using Google Accounts lets the user start using your application faster, because the user may not need to create a new account. It also saves you the effort of implementing a user account system just for your application.
App Engine provides a variety of services that enable you to perform common operations when managing your application, accessed through the following APIs:

• URL fetch: applications can access resources on the Internet, such as web services or other data, using App Engine's URL fetch service.
• Mail: applications can send email messages using App Engine's mail service, which uses Google infrastructure to send the messages.
• Images: the Image service lets your application manipulate images; with this API, you can resize, crop, rotate, and flip images in JPEG and PNG formats.

In theory, Google claims App Engine can scale nicely. But Google currently places a limit of 5 million hits per month on each application. This limit undercuts App Engine's scalability claims, because any small dedicated server can deliver this performance. Google will eventually allow webmasters to go beyond this limit (if they pay).
6. Conclusion

Cloud computing is a powerful new abstraction for large-scale data processing systems that is scalable, reliable, and available. In cloud computing, large self-managed server pools reduce overhead and eliminate management headaches. Cloud computing services can also grow and shrink according to need. Cloud computing is particularly valuable to small and medium businesses, where effective and affordable IT tools are critical to becoming more productive without spending lots of money on in-house resources and technical equipment. It is also a new, emerging architecture needed to expand the Internet to become the computing platform of the future.
Active In SP
Joined: Jun 2010
02-10-2010, 12:54 PM
cloud computing.docx (Size: 1.32 MB / Downloads: 748)
Cloud computing has gained a lot of hype in the current world of IT. It is said to be the next big thing in the computer world after the Internet.
In general terms, cloud computing refers to anything where computing is done at some remote location over the Internet, the results are displayed on the user's screen, and the user accesses the cloud using a familiar web browser. This definition is true to some extent, but not completely.
Cloud computing is a broad new technology, still young. The industry is still struggling to define what counts as cloud computing and what does not.
Like any new IT trend, cloud computing gets its fair share of hype, and with it comes a multitude of vendors that use the term in ways it was never intended for, making it devoid of any sense. When pushed to the extreme, a simple server connected to a network seems to qualify as a cloud, allowing pundits to mock the concept. Yet cloud computing is not a passing fad. It is a major step forward in the development of distributed computing, and one that will reshape the IT industry. But for it to happen, we must agree on a clear definition of the concept first, and the less technical it is, the better.
Here is how Wikipedia defines cloud computing:
“Cloud computing is the provision of dynamically scalable and often virtualized resources as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the “cloud” that supports them. Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers.”
The definition we propose is borrowed from Neil Ward‐Dutton, who works for MWD Advisors, an advisory firm which focuses on issues surrounding IT‐business alignment. In a post released in June 2009, Neil outlined the seven elements of cloud computing value, reproduced here with the author’s permission and some minor editing, suggested in part by Gartner’s Daryl Plummer.
In a nutshell, cloud computing can be defined as a set of computing and storage resources providing an application platform as a service. This platform is characterized by a unique set of economic, architectural, and strategic elements of value, which distinguishes it from anything that has been available so far.
Dynamic computing infrastructure
Cloud computing requires a dynamic computing infrastructure. The foundation for the dynamic infrastructure is a standardized, scalable, and secure physical infrastructure. There should be levels of redundancy to ensure high levels of availability, but mostly it must be easy to extend as usage growth demands it, without requiring architecture rework. Next, it must be virtualized.
Today, virtualized environments leverage server virtualization (typically from VMware, Microsoft, or Xen) as the basis for running services. These services need to be easily provisioned and de-provisioned via software automation. These service workloads need to be moved from one physical server to another as capacity demands increase or decrease. Finally, this infrastructure should be highly utilized, whether provided by an external cloud provider or an internal IT department. The infrastructure must deliver business value over and above the investment.
A dynamic computing infrastructure is critical to effectively supporting the elastic nature of service provisioning and de-provisioning as requested by users while maintaining high levels of reliability and security. The consolidation provided by virtualization, coupled with provisioning automation, creates a high level of utilization and reuse, ultimately yielding a very effective use of capital equipment.
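The provision/de-provision loop described above can be sketched in a few lines. This is a minimal illustration, not a real cloud API: the names (`servers_needed`, `rebalance`) and the capacity and utilization figures are assumptions chosen for the example.

```python
import math

TARGET_UTILIZATION = 0.70   # aim for ~70% average utilization (assumed target)
SERVER_CAPACITY = 100       # requests/second one virtual server can absorb (assumed)

def servers_needed(current_load: float) -> int:
    """Return how many virtual servers the current load justifies."""
    return max(1, math.ceil(current_load / (SERVER_CAPACITY * TARGET_UTILIZATION)))

def rebalance(pool: list, current_load: float) -> list:
    """Provision or de-provision servers so the pool matches demand."""
    needed = servers_needed(current_load)
    while len(pool) < needed:
        pool.append(f"vm-{len(pool)}")   # provision: e.g. clone a VM image
    while len(pool) > needed:
        pool.pop()                       # de-provision: release the VM
    return pool

pool = rebalance([], current_load=350.0)   # a demand spike grows the pool
print(len(pool))                           # 5 servers at the 70% target
```

A real automation layer would also migrate workloads off a server before releasing it, as the text notes.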
IT service-centric approach
Cloud computing is IT (or business) service-centric. This is in stark contrast to more traditional system- or server-centric models. In most cases, users of the cloud want to run a business service or application for a specific, timely purpose; they don't want to get bogged down in the system and network administration of the environment. They would prefer to quickly and easily access a dedicated instance of an application or service. By abstracting away the server-centric view of the infrastructure, users can easily access powerful, pre-defined computing environments designed specifically around their service.
An IT service-centric approach enables user adoption and business agility: the easier and faster a user can perform an administrative task, the more quickly the business moves, reducing costs or driving revenue.
Active In SP
Joined: Jun 2010
12-10-2010, 03:00 PM
cloud computing ppt.pptx (Size: 317.3 KB / Downloads: 579)
This article is presented by:
What is cloud computing?
Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand.
The name cloud computing was inspired by the cloud symbol that is often used to represent the Internet in flowcharts and diagrams.
Introduction to cloud computing
To understand cloud computing more deeply, let's assume you're an executive at a large corporation.
Your particular responsibilities include making sure that all of your employees have the right hardware and software they need to do their jobs.
Buying computers for everyone isn't enough -- you also have to purchase software or software licenses to give employees the tools they require.
Whenever you have a new hire, you have to buy more software or make sure your current software license allows another user.
It's so stressful that you find it difficult to go to sleep on your huge pile of money every night.
Soon, there may be an alternative for executives like you.
Instead of installing a suite of software for each computer, you'd only have to load one application.
That application would allow workers to log into a Web-based service which hosts all the programs the user would need for his or her job.
Remote machines owned by another company would run everything from e-mail to word processing to complex data analysis programs.
It's called cloud computing, and it could change the entire computer industry.
A real-life example: when we go to a restaurant, we eat the food and pay for it.
Here the restaurant acts as the cloud computing company,
and the food acts as the company's software, which we use.
Many companies are trying hard to get into the business of cloud computing.
Amazon has been very successful among them.
Active In SP
Joined: Jun 2010
29-11-2010, 10:09 AM
New Microsoft Office Word Document.docx (Size: 715.29 KB / Downloads: 315)
Cloud computing is Internet ("cloud")-based development and use of computer technology ("computing"). It is a style of computing in which dynamically scalable and often virtualised resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them.
The concept incorporates infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) as well as Web 2.0 and other recent (ca. 2007-2009) technology trends which have the common theme of reliance on the Internet for satisfying the computing needs of the users. Examples of SaaS vendors include Salesforce.com and Google Apps, which provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers. A cloud is a pool of virtualized computer resources. A cloud can:
1. Host a variety of different workloads, including batch-style back-end jobs and interactive, user-facing applications.
2. Allow workloads to be deployed and scaled out quickly through the rapid provisioning of virtual machines or physical machines.
3. Support redundant, self-recovering, highly scalable programming models that allow workloads to recover from many unavoidable hardware/software failures.
4. Monitor resource use in real time to enable rebalancing of allocations when needed.
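Item 3 above (self-recovering programming models) can be illustrated with a small retry wrapper. This is a hedged sketch, not a real framework: the function name and retry budget are made up, and a real cloud scheduler would re-provision the work onto a fresh machine rather than simply re-run it in place.

```python
# Sketch of a self-recovering workload wrapper (illustrative names only).

def run_with_recovery(task, max_retries=3):
    """Re-run a workload when its node fails with a hardware/software error."""
    for attempt in range(max_retries + 1):
        try:
            return task()
        except RuntimeError:
            if attempt == max_retries:
                raise   # give up after exhausting the retry budget

# A workload whose first two "nodes" fail before one succeeds:
failures = {"left": 2}

def flaky_job():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise RuntimeError("node failure")
    return "job complete"

print(run_with_recovery(flaky_job))   # succeeds on the third attempt
```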
The underlying concept dates back to 1960, when John McCarthy opined that "computation may someday be organized as a public utility"; indeed it shares characteristics with service bureaus, which date back to the 1960s. The term cloud had already come into commercial use in the early 1990s to refer to large ATM networks. By the turn of the 21st century, the term "cloud computing" had started to appear, although most of the focus at this time was on Software as a Service (SaaS).
In 1999, Salesforce.com was established by Marc Benioff, Parker Harris, and their colleagues. They applied many technologies of consumer web sites like Google and Yahoo! to business applications.
IBM extended these concepts in 2001, as detailed in the Autonomic Computing Manifesto, which described advanced automation techniques such as self-monitoring, self-healing, self-configuring, and self-optimizing in the management of complex IT systems with heterogeneous storage, servers, applications, networks, security mechanisms, and other system elements that can be virtualized across an enterprise.
Amazon.com played a key role in the development of cloud computing by modernizing its data centres after the dot-com bubble and, having found that the new cloud architecture resulted in significant internal efficiency improvements, providing access to its systems by way of Amazon Web Services in 2002 on a utility computing basis.
2007 saw increased activity, with Google, IBM, and a number of universities embarking on a large-scale cloud computing research project, around the time the term started gaining popularity in the mainstream press.
WORKING OF CLOUD COMPUTING
In cloud computing you only need to load one application. This application would allow workers to log into a Web-based service which hosts all the programs the user would need for his or her job. Remote machines owned by another company would run everything from e-mail to word processing to complex data analysis programs. It's called cloud computing, and it could change the entire computer industry.
In a cloud computing system, there's a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that make up the cloud handles them instead. Hardware and software demands on the user's side decrease. The only thing the user's computer needs to be able to run is the cloud computing system's interface software, which can be as simple as a Web browser, and the cloud's network takes care of the rest.
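A rough sketch of this workload shift, with the "cloud" side as an ordinary function standing in for a remote Web service (in reality the client would make an HTTPS call from a browser; the function names and JSON shapes below are assumptions for illustration):

```python
import json

def cloud_service(request_json: str) -> str:
    """Runs in the data center: does the heavy lifting (here, word counts)."""
    document = json.loads(request_json)["document"]
    counts = {}
    for word in document.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return json.dumps(counts)

def thin_client(document: str) -> dict:
    """Runs on the user's machine: no application logic beyond the call."""
    reply = cloud_service(json.dumps({"document": document}))
    return json.loads(reply)

print(thin_client("to be or not to be"))
```

The client needs only enough software to format a request and render the reply, matching the "as simple as a Web browser" point above.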
Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, comprises hardware and software designed by a cloud architect who typically works for a cloud integrator. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services.
Cloud architecture extends to the client, where web browsers and/or software applications access cloud applications.
Cloud storage architecture is loosely coupled: metadata operations are centralized, enabling the data nodes to scale into the hundreds, each independently delivering data to applications or users.
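One way to picture this loose coupling is a centralized lookup that maps each object to one of many data nodes, while the data transfer itself happens directly between the client and the chosen node. The hashing scheme, node count, and names below are illustrative, not any product's design.

```python
import hashlib

DATA_NODES = [f"node-{i}" for i in range(100)]   # data nodes scale horizontally

def locate(object_key: str) -> str:
    """Centralized metadata operation: map an object to its data node.
    Only this small lookup is centralized; each node then serves its
    objects to clients independently of the others."""
    digest = int(hashlib.sha256(object_key.encode()).hexdigest(), 16)
    return DATA_NODES[digest % len(DATA_NODES)]

print(locate("reports/q3.pdf"))   # same key always maps to the same node
```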
A cloud application leverages the cloud in software architecture, often eliminating the need to install and run the application on the customer's own computer, thus alleviating the burden of software maintenance, ongoing operation, and support.
2. CLOUD CLIENTS
A cloud client consists of computer hardware and/or computer software which relies on the cloud for application delivery, or which is specifically designed for delivery of cloud services and which, in either case, is essentially useless without it. Examples: mobile devices, thin clients, and thick clients / Web browsers.
3. CLOUD INFRASTRUCTURE
Cloud infrastructure, such as Infrastructure as a Service, is the delivery of computer infrastructure, typically a platform virtualization environment, as a service. Examples: grid computing, management, compute, and platform services.
4. CLOUD PLATFORMS
A cloud platform, such as PaaS (the delivery of a computing platform and/or solution stack as a service), facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers.
5. CLOUD SERVICES
A cloud service includes "products, services and solutions that are delivered and consumed in real-time over the Internet". An example is Web services ("software system[s] designed to support interoperable machine-to-machine interaction over a network"), which may be accessed by other cloud computing components, by software (e.g., Software plus Services), or by end users directly.
6. CLOUD STORAGE
Cloud storage involves the delivery of data storage as a service, including database-like services, often billed on a utility computing basis, e.g., per gigabyte per month. Examples: databases, network-attached storage, and Web services.
TYPES OF CLOUDS
1. PUBLIC CLOUD
Public cloud or external cloud describes cloud computing in the traditional mainstream sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.
2. HYBRID CLOUD
A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises".
3. PRIVATE CLOUD
Private cloud and internal cloud are neologisms that some vendors have recently used to describe offerings that emulate cloud computing on private networks. These (typically virtualisation automation) products claim to "deliver some benefits of cloud computing without the pitfalls", capitalising on data security, corporate governance, and reliability concerns. They have been criticised on the basis that users "still have to buy, build, and manage them" and as such do not benefit from lower up-front capital costs and less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept". While an analyst predicted in 2008 that private cloud networks would be the future of corporate IT, there is some contention as to whether they are a reality even within the same firm.
ROLES PLAYED IN CLOUD COMPUTING
1. CLOUD COMPUTING PROVIDERS
A cloud computing provider or cloud computing service provider owns and operates live cloud computing systems to deliver service to third parties. Usually this requires significant resources and expertise in building and managing next-generation data centers. Some organisations realise a subset of the benefits of cloud computing by becoming "internal" cloud providers and servicing themselves, although they do not benefit from the same economies of scale and still have to engineer for peak loads. The barrier to entry is also significantly higher, with capital expenditure required, and billing and management create some overhead. Nonetheless, significant operational efficiency and agility advantages can be realised, even by small organisations, and server consolidation and virtualization rollouts are already well underway. Amazon.com was the first such provider, modernising its data centers which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. This allowed small, fast-moving groups to add new features faster and more easily, and Amazon then opened the platform to outsiders as Amazon Web Services in 2002 on a utility computing basis.
Players in the cloud computing service provision game include the likes of Amazon, Google, Hewlett Packard, IBM, Intel, Microsoft, Salesforce, SAP and Yahoo!
A user is a consumer of cloud computing. The privacy of users in cloud computing has become of increasing concern. The rights of users are also an issue, which is being addressed via a community effort to create a bill of rights.
A vendor sells products and services that facilitate the delivery, adoption, and use of cloud computing. Examples: computer hardware, storage, infrastructure, computer software, operating systems, and platform virtualization.
Active In SP
Joined: Feb 2011
24-02-2011, 03:13 PM
Cloud computing.docx (Size: 147.83 KB / Downloads: 216)
Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid.
Cloud computing is a paradigm shift following the shift from mainframe to client–server in the early 1980s. Details are abstracted from the users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them. Cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, and it typically involves over-the-Internet provision of dynamically scalable and often virtualized resources. It is a byproduct and consequence of the ease of access to remote computing sites provided by the Internet. This frequently takes the form of web-based tools or applications that users can access and use through a web browser as if they were programs installed locally on their own computers. NIST provides a somewhat more objective and specific definition. The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Typical cloud computing providers deliver common business applications online that are accessed from another Web service or software like a Web browser, while the software and data are stored on servers. A key element of cloud computing is customization and the creation of a user-defined experience.
Most cloud computing infrastructures consist of services delivered through common centers and built on servers. Clouds often appear as single points of access for all consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers, and typically include SLAs. The major cloud service providers include the likes of Amazon, Google, IBM, Microsoft, and Salesforce.
Cloud computing derives characteristics from, but should not be confused with:
1. Autonomic computing — "computer systems capable of self-management".
2. Client–server model – Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).
3. Grid computing — "a form of distributed computing and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks".
4. Mainframe computer — powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.
5. Utility computing — the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity".
6. Peer-to-peer – a distributed architecture without the need for central coordination, with participants being at the same time both suppliers and consumers of resources (in contrast to the traditional client–server model).
In general, cloud computing customers do not own the physical infrastructure, instead avoiding capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for resources that they use. Many cloud-computing offerings employ the utility computing model, which is analogous to how traditional utility services (such as electricity) are consumed, whereas others bill on a subscription basis. Sharing "perishable and intangible" computing power among multiple tenants can improve utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development). A side-effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak load limits. In addition, "increased high-speed bandwidth" makes it possible to receive the same response times from centralized infrastructure at other sites.
Cloud engineering is a field of engineering that generally deals with the lifecycle of cloud computing solutions, including analysis, design, development, testing, integration, buildout, delivery, operation and consumption of cloud products and services.
Cloud computing users avoid capital expenditure (CapEx) on hardware, software, and services when they pay a provider only for what they use. Consumption is usually billed on a utility (resources consumed, like electricity) or subscription (time-based, like a newspaper) basis with little or no upfront cost. Other benefits of this time sharing-style approach are low barriers to entry, shared infrastructure and costs, low management overhead, and immediate access to a broad range of applications. In general, users can terminate the contract at any time (thereby avoiding return on investment risk and uncertainty), and the services are often covered by service level agreements (SLAs) with financial penalties.
According to Nicholas Carr, the strategic importance of information technology is diminishing as it becomes standardized and less expensive. He argues that the cloud computing paradigm shift is similar to the displacement of electricity generators by electricity grids early in the 20th century.
Although companies might be able to save on upfront capital expenditures, they might not save much and might actually pay more for operating expenses. In situations where the capital expense would be relatively small, or where the organization has more flexibility in their capital budget than their operating budget, the cloud model might not make great fiscal sense. Other factors impacting the scale of any potential cost savings include the efficiency of a company's data center as compared to the cloud vendor's, the company's existing operating costs, the level of adoption of cloud computing, and the type of functionality being hosted in the cloud.
Among the items that some cloud hosts charge for are instances (often with extra charges for high-memory or high-CPU instances); data transfer in and out; storage (measured by the GB-month); I/O requests; PUT requests and GET requests; IP addresses; and load balancing. In some cases, users can bid on instances, with pricing dependent on demand for available instances.
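A toy version of such an itemized bill can make the charge categories concrete. The rate card below is entirely made up for illustration; real providers publish their own prices and units.

```python
# Illustrative rate card (assumed figures, not any provider's real prices).
RATES = {
    "instance_hour": 0.10,       # per instance-hour
    "gb_transferred": 0.12,      # data transfer in/out, per GB
    "gb_month_stored": 0.15,     # storage, per GB-month
    "per_1000_requests": 0.01,   # PUT/GET and other I/O requests
}

def monthly_bill(instance_hours, gb_transferred, gb_months, requests):
    """Sum the metered line items into one utility-style invoice."""
    return round(
        instance_hours * RATES["instance_hour"]
        + gb_transferred * RATES["gb_transferred"]
        + gb_months * RATES["gb_month_stored"]
        + requests / 1000 * RATES["per_1000_requests"],
        2,
    )

# One instance running all month (720 h), 50 GB transferred,
# 100 GB stored, 2 million requests:
print(monthly_bill(720, 50, 100, 2_000_000))   # 113.0
```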
Active In SP
Joined: Feb 2011
25-02-2011, 10:24 AM
CC.ppt (Size: 12.19 MB / Downloads: 304)
What is Cloud Computing?
• “Cloud” is simply a metaphor for the internet
• Users do not have or need knowledge, control, or ownership of the computer infrastructure
• Users simply rent or access the software, paying only for what they use
History of Cloud Computing
• The concept dates back to the 1960s, when computer scientist John McCarthy suggested that "computation may someday be organized as a public utility"
• Idea that revolutionized cloud computing: moving from cluster computing to grid computing
• “In some ways, the cloud is a natural next step from the grid-utility model,” said Frank Gens, an analyst at the research firm IDC
The “Super Computer” in the Sky
• Two ways of building a “super computer” with enough power that users can plug into according to their needs at a particular time:
– Blue Gene Approach
– Google's Approach
Early Leaders in the Industry
• In 2007, Microsoft made available free software at http://www.live.com that connects its Windows operating system to software services delivered on the Internet
• Launched in July 2002, Amazon Web Services provided online services for other web sites or client-side applications
• 3tera launched its AppLogic system in February, 2006
• IBM’s system introduced in the mid 2000’s is called Blue Cloud
• Data latency – time delay between data being requested and delivered
• Security and confidentiality of data being stored outside the company
• Business buy-in; convincing companies of the infrastructure and reliability
• Cloud computing is an emerging technology that is revolutionizing IT infrastructure, flexibility, and software as a service (SaaS)
• During this economic recession there are huge cost-reduction pressures, and cloud computing allows businesses to cut costs by tapping into cloud computing platforms on a pay-as-you-go basis
• Customer retention is vital, especially today in our economy
Active In SP
Joined: Feb 2011
08-03-2011, 12:38 PM
cloud computing.ppt (Size: 2.45 MB / Downloads: 335)
• Key to the definition of cloud computing is the "cloud" itself. Here, the cloud is a large group of interconnected computers.
• These computers can be personal computers or network servers; they can be public or private.
• The term "cloud" is borrowed from telephony: telecommunications companies, who until the 1990s offered dedicated point-to-point data circuits, began offering virtual private network (VPN) services of comparable quality at much lower cost
What Is Cloud Computing?
Cloud computing is using the Internet to access someone else’s software running on someone else’s hardware in someone else’s data centre while paying only for what you use.
SaaS = Software as a Service (eg: Gmail, Google Calendar,...)
PaaS = Platform as a Service (eg: Google App Engine)
IaaS = Infrastructure as a Service (eg: Amazon EC2)
BENEFITS OF CLOUD COMPUTING:
REDUCTION OF HARDWARE COSTS
Thus cloud computing provides supercomputing power. This cloud of computers extends beyond a single company or enterprise.
The applications and data served by the cloud are available to a broad group of users, cross-enterprise and cross-platform.
Active In SP
Joined: Feb 2011
10-03-2011, 12:44 PM
cloud comp.docx (Size: 1.22 MB / Downloads: 149)
With the significant advances in Information and Communications Technology over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility (after water, electricity, gas, and telephony). This computing utility, like the four existing utilities, will provide the basic level of computing service that is considered essential to meet the everyday needs of the general community. To deliver this vision, a number of computing paradigms have been proposed, of which the latest is known as cloud computing. Cloud computing promises to increase the velocity with which applications are deployed, increase innovation, and lower costs, all while increasing business agility.
Everyone has an opinion on what cloud computing is. It can be the ability to rent a server, or a thousand servers, and run a geophysical modeling application on the most powerful systems available anywhere. It can be the ability to rent a virtual server, load software on it, turn it on and off at will, or clone it ten times to meet a sudden workload demand. It can be storing and securing immense amounts of data that is accessible only by authorized applications and users. Cloud computing can be the ability to use applications on the Internet that store and protect data while providing a service: anything including email, sales force automation, and tax preparation. It can be using a storage cloud to hold application, business, and personal data. And it can be the ability to use a handful of Web services to integrate photos, maps, and GPS information to create a mashup in customer Web browsers. Cloud computing increases profitability by improving resource utilization. Pooling resources into large clouds drives down costs and increases utilization by delivering resources only for as long as those resources are needed.
Cloud computing allows individuals, teams, and organizations to streamline procurement processes and eliminate the need to duplicate certain computer administrative skills related to setup, configuration, and support. This paper introduces the value of implementing cloud computing, explains its business benefits, and outlines cloud architecture and its major components.
Building on established trends
Cloud computing builds on established trends for driving cost out of the delivery of services while increasing the speed and agility with which services are deployed. Cloud computing incorporates virtualization, on-demand deployment, Internet delivery of services, and open source software. In one sense, everything is new, because cloud computing changes how we invent, develop, deploy, scale, update, maintain, and pay for applications and the infrastructure on which they run. In this chapter, we examine these trends and how they have become core to what cloud computing is all about.
The Nature of Cloud Computing:
Virtual machines as the standard deployment object
Over the last several years, virtual machines have become a standard deployment object. Virtualization further enhances flexibility because it abstracts the hardware to the point where software stacks can be deployed and redeployed without being tied to a specific physical server. Virtualization enables a dynamic datacenter where servers provide a pool of resources that are harnessed as needed, and where the relationship of applications to compute, storage, and network resources changes dynamically in order to meet both workload and business demands. Using virtual machines as deployment objects is sufficient for 80 percent of usage, and it helps to satisfy the need to rapidly deploy and scale applications.
The on-demand, self-service, pay-by-use model:
The on-demand, self-service, pay-by-use nature of cloud computing is also an extension of established trends. From an enterprise perspective, the on-demand nature of cloud computing helps to support the performance and capacity aspects of service-level objectives. The self-service nature of cloud computing allows organizations to create elastic environments that expand and contract based on the workload and target performance parameters. And the pay-by-use nature of cloud computing may take the form of equipment leases that guarantee a minimum level of service from a cloud provider. IT organizations have understood for years that virtualization allows them to quickly and easily create copies of existing environments, sometimes involving multiple virtual machines, to support test, development, and staging activities. This lightweight deployment model has already led to a "Darwinistic" approach to business development where beta versions of software are made public and the market decides which applications deserve to be scaled and developed further or quietly retired. Cloud computing extends this trend through automation: a self-service interface or API is used to create virtual machines and establish network relationships between them. Instead of requiring a long-term contract for services with an IT organization or a service provider, clouds work on a pay-by-use, or pay-by-the-sip, model where an application may exist to run a job for a few minutes or hours, or it may exist to provide services to customers on a long-term basis. The ability to use and pay for only the resources used shifts the risk of how much infrastructure to purchase from the organization developing the application to the cloud provider.
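The risk shift behind the pay-by-use model can be made concrete with simple break-even arithmetic. The $3,000 server price and $0.10/hour rental rate below are assumed figures for illustration, not real vendor prices.

```python
HOURS_PER_YEAR = 24 * 365

def break_even_hours(server_price: float, rate_per_hour: float) -> float:
    """Hours of steady use at which renting has cost as much as buying.
    Below this point, pay-by-use leaves the capacity risk with the provider."""
    return server_price / rate_per_hour

# A $3,000 server vs. renting an equivalent instance at $0.10/hour:
hours = break_even_hours(3000.0, 0.10)
print(round(hours))                       # 30000 hours
print(round(hours / HOURS_PER_YEAR, 1))   # about 3.4 years of continuous use
```

A job that runs for a few minutes or hours never approaches the break-even point, which is why short, bursty workloads favor the pay-by-the-sip model.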
Another consequence of the self-service, pay-by-use model is that applications are composed by assembling and configuring appliances and open-source software as much as they are programmed.
Applications and architectures that can be refactored in order to make the most use of standard components are those that will be the most successful in leveraging the benefits of cloud computing.
Likewise, application components should be designed to be composable by building them so they can be consumed easily. This requires having simple, clear functions, and well-documented APIs.
Building large, monolithic applications is a thing of the past as the library of existing tools that can be used directly or tailored for a specific use becomes ever larger.
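As a sketch of such composable components, here are three tiny stubbed "services" with simple, documented interfaces, assembled into the photo/map mashup mentioned earlier. All names and data are hypothetical; the point is that each piece is consumable on its own and replaceable.

```python
def fetch_photos(user_id: str) -> list:
    """Component 1: return photo records for a user (stubbed sample data)."""
    return [{"id": "p1", "lat": 48.85, "lon": 2.35},
            {"id": "p2", "lat": 51.50, "lon": -0.12}]

def to_map_point(photo: dict) -> tuple:
    """Component 2: adapt a photo record to what the map component consumes."""
    return (photo["lat"], photo["lon"])

def render_map(points: list) -> str:
    """Component 3: render markers (stubbed as a text description)."""
    return f"map with {len(points)} markers"

# Composition: the application is assembled from parts, not programmed
# as one monolith, so any component can be swapped out independently.
print(render_map([to_map_point(p) for p in fetch_photos("alice")]))
```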
Active In SP
Joined: Feb 2011
12-03-2011, 09:57 AM
Abou Sofyane Khedim
454160_634160700708357500.ppt (Size: 1.93 MB / Downloads: 233)
a smart way to reduce IT cost, CO2 footprint and provide services anywhere, anytime
What is cloud computing
• “Cloud” is actually a metaphor for the Internet.
• Users do not have or need knowledge, control, or ownership of the computer infrastructure.
• Users simply rent or access the software, paying only for what they use
What is cloud computing
Cloud computing is using the Internet to access someone else’s software running on someone else’s hardware in someone else’s data centre while paying only for what you use.
SaaS = Software as a Service
(eg: Gmail, Google Calendar,...)
PaaS = Platform as a Service
(eg: Google App Engine)
IaaS = Infrastructure as a Service
(eg: Amazon EC2)
Why cloud computing
• Data centres are notoriously underutilised, often idle 85% of the time
o Over provisioning
o Insufficient capacity planning and sizing
o Improper understanding of scalability requirements, etc.
• Analysts, including thought leaders from Gartner, Forrester, and IDC, agree that this new model offers significant advantages for fast-paced startups, SMBs, and enterprises alike.
• Cost effective solutions to key business demands
• Move workloads to improve efficiency
Why cloud computing - Traditional IT
Why cloud computing - Cloud
Why cloud computing
• Instant Elasticity
o Scale up / down
• Reduce IT Cost
o Pay only what you use
• Green / CO2 footprint reduced
o Unused servers are switched off automatically.
o No longer any need for a powerful computer to run powerful software
Healthcare in the cloud
• IT requirements
o Functional and non-functional requirements are more complex.
• We care about where the data are stored and who takes care of them.
• The public cloud cannot really meet these needs.
o The cloud vendor can keep our data anywhere.
• The private cloud may claim to be more secure, but the cost of maintaining the infrastructure is still present.
Healthcare in the cloud
• The hybrid cloud could be the solution.
o The private side could be used only to process and store sensitive data.
o The public side could be used to compute non-sensitive data.
• BUT it is difficult to implement.
o Strong policy rules must be implemented to target both clouds.
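Such a policy rule might look like the following sketch, which routes each record to the private or public side of a hybrid cloud. The field names and the routing function are purely illustrative assumptions, not a real healthcare schema or product API.

```python
# Hypothetical set of fields that mark a record as sensitive patient data.
SENSITIVE_FIELDS = {"patient_name", "nhs_number", "diagnosis"}

def route(record: dict) -> str:
    """Policy rule sketch: records containing any sensitive field stay on
    the private side; everything else may use the cheaper public side."""
    return "private" if SENSITIVE_FIELDS & record.keys() else "public"

print(route({"patient_name": "J. Smith", "diagnosis": "flu"}))  # private
print(route({"ward_temperature": 21.5}))                        # public
```

In practice, the hard part the slide warns about is keeping such rules consistent and enforced across both clouds.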
Healthcare in the cloud
• Cloud infrastructure brings many advantages for hospitals; nevertheless, the public cloud architecture, the most economical option with virtually endless resources, cannot really be enjoyed by hospitals.
• The best solution could be to introduce a new cloud architecture which combines the advantages of the three existing architectures.
• The government could create its own private cloud and offer IaaS / PaaS / SaaS to the UK hospitals.
Healthcare in the cloud
Who has already used the cloud
• Peter Harkins at The Washington Post: 200 EC2 instances (1,407 server hours), convert 17,481 pages of Hillary Clinton’s travel documents within 9 hours
• The New York Times used 100 Amazon EC2 instances and a Hadoop application to convert 4 TB of raw TIFF images into 1.1 million PDFs in 24 hours ($240)
• The U.S. Defense Department now offers cloud computing services.
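The New York Times figure is easy to verify against EC2's then-current small-instance price of $0.10 per instance-hour:

```python
# Checking the quoted $240: 100 EC2 instances for 24 hours at the
# small-instance rate of $0.10 per instance-hour (2008 pricing).
instances = 100
hours = 24
rate = 0.10                        # USD per instance-hour

cost = instances * hours * rate
print(f"${cost:.2f}")              # $240.00
```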
Active In SP
Joined: Feb 2011
13-03-2011, 05:41 PM
I want to do a seminar and presentation on the topic Software as a Service. I need a 30-36 page report, so I request you to send me the full report and PPT of SaaS to this id firstname.lastname@example.org.
Awaiting your reply,
Active In SP
Joined: Feb 2011
16-03-2011, 10:58 AM
20090911_VirtualizationAndCloud.ppt (Size: 532.5 KB / Downloads: 278)
Virtualization and Cloud Computing
An opening caveat ...
This talk is based on speeches at conferences, discussions with people in industry, and some experimentation.
A lot of people think they will make a lot of money – so there is lots of hype!
But there seems to be something fundamental going on.
Two Technologies for Agility
The ability to run multiple operating systems on a single physical system and share the underlying hardware resources*
“The provisioning of services in a timely (near on instant), on-demand manner, to allow the scaling up and down of resources”**
The Traditional Server Concept
And if something goes wrong ...
The Traditional Server Concept
System Administrators often talk about servers as a whole unit that includes the hardware, the OS, the storage, and the applications.
Servers are often referred to by their function i.e. the Exchange server, the SQL server, the File server, etc.
If the File server fills up, or the Exchange server becomes overtaxed, then the System Administrators must add in a new server.
The Traditional Server Concept
Unless there are multiple servers, if a service experiences a hardware failure, then the service is down.
System Admins can implement clusters of servers to make them more fault tolerant. However, even clusters have limits on their scalability, and not all applications work in a clustered environment.
The Traditional Server Concept
Easy to conceptualize
Fairly easy to deploy
Easy to backup
Virtually any application/service can be run from this type of setup
Expensive to acquire and maintain hardware
Not very scalable
Difficult to replicate
Redundancy is difficult to implement
Vulnerable to hardware outages
In many cases, processor is under-utilized
The Virtual Server Concept
The Virtual Server Concept
Virtual servers seek to encapsulate the server software away from the hardware
This includes the OS, the applications, and the storage for that server.
Servers end up as mere files stored on a physical box, or in enterprise storage.
A virtual server can be serviced by one or more hosts, and one host may house more than one virtual server.
The Virtual Server Concept
Virtual servers can still be referred to by their function i.e. email server, database server, etc.
If the environment is built correctly, virtual servers will not be affected by the loss of a host.
Hosts may be removed and introduced almost at will to accommodate maintenance.
The Virtual Server Concept
Virtual servers can be scaled out easily.
If the administrators find that the resources supporting a virtual server are being taxed too much, they can adjust the amount of resources allocated to that virtual server.
Server templates can be created in a virtual environment and used to create multiple, identical virtual servers.
Virtual servers themselves can be migrated from host to host almost at will.
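The server-template idea can be sketched as follows: one template captures the OS and software stack, and identical virtual servers are stamped out from it. The class and field names here are hypothetical, not any vendor's API:

```python
# Sketch of a server template that stamps out identical virtual servers.
# ServerTemplate and its fields are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class ServerTemplate:
    os: str
    packages: tuple

    def instantiate(self, name: str) -> dict:
        # every clone gets exactly the same OS and software stack
        return {"name": name, "os": self.os, "packages": list(self.packages)}

web = ServerTemplate(os="linux", packages=("nginx", "php"))
farm = [web.instantiate(f"web-{i}") for i in range(3)]
print([s["name"] for s in farm])   # ['web-0', 'web-1', 'web-2']
```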
The Virtual Server Concept
Rapidly deploy new servers
Easy to deploy
Reconfigurable while services are running
Optimizes physical resources by doing more with less
Slightly harder to conceptualize
Slightly more costly (must buy hardware, OS, Apps, and now the abstraction layer)
Offerings from many companies
e.g. VMware, Microsoft, Sun, ...
Fits well with the move to 64 bit (very large memories) multi-core (concurrency) processors.
Intel VT (Virtualization Technology) provides hardware to support the Virtual Machine Monitor layer
Virtualization is now a well-established technology
So what about Cloud Computing?
Suppose you are Forbes.com
You offer on-line real time stock market data
Why pay for capacity on weekends and overnight?
Host the web site in Amazon's EC2 Elastic Compute Cloud
Provision new servers every day, and deprovision them every night
Pay just $0.10* per server per hour
* more for higher capacity servers
Let Amazon worry about the hardware!
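A back-of-the-envelope comparison of round-the-clock provisioning versus deprovisioning every night, using the $0.10/hour rate above; the 10-server farm and the 8-hour daily window are illustrative assumptions:

```python
# Back-of-the-envelope: rent only during business hours vs run 24/7.
# The server count and the 8-hour window are invented for illustration.
rate = 0.10                        # USD per server-hour (basic rate above)
servers = 10

always_on = servers * 24 * rate    # provisioned round the clock
on_demand = servers * 8 * rate     # deprovisioned every night

print(f"${always_on:.2f}/day always-on vs ${on_demand:.2f}/day on demand")
```

With these numbers the daily bill drops from $24.00 to $8.00, which is the whole point of daily provisioning and deprovisioning.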
Cloud computing takes virtualization to the next step
You don’t have to own the hardware
You “rent” it as needed from a cloud
There are public clouds
e.g. Amazon EC2, and now many others (Microsoft, IBM, Sun, and others ...)
A company can create a private one
With more control over security, etc.
Goal 1 – Cost Control
Many systems have variable demands
Batch processing (e.g. New York Times)
Web sites with peaks (e.g. Forbes)
Startups with unknown demand (e.g. the Cash for Clunkers program)
Don't need to buy hardware until you need it
Goal 2 - Business Agility
More than scalability - elasticity!
Eli Lilly, in the rapidly changing health care business
Used to take 3 - 4 months to give a department a server cluster, then they would hoard it!
Using EC2, about 5 minutes!
And they give it back when they are done!
Scaling back is as important as scaling up
Goal 3 - Stick to Our Business
Most companies don't WANT to do system administration
We are a publishing company, not a software company
Do you really save much on sys admin?
You don't have the hardware, but you still need to manage the OS!
How Cloud Computing Works
Various providers let you create virtual servers
Set up an account, perhaps just with a credit card
You create virtual servers ("virtualization")
Choose the OS and software each "instance" will have
It will run on a large server farm located somewhere
You can instantiate more on a few minutes' notice
You can shut down instances in a minute or so
They send you a bill for what you use
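The workflow above (create instances, run them, get billed for what you use) can be mimicked with a toy metering class; the rate is illustrative:

```python
# Toy metering of pay-per-use billing: instances run for some hours and
# the bill covers only the hours actually used. The rate is illustrative.

class CloudBill:
    def __init__(self, rate_per_hour: float = 0.10):
        self.rate = rate_per_hour
        self.hours = 0.0

    def run_instance(self, hours: float) -> None:
        self.hours += hours        # the meter records usage, not capacity

    def total(self) -> float:
        return self.hours * self.rate

bill = CloudBill()
bill.run_instance(9)    # an overnight batch job
bill.run_instance(3)    # a short-lived test server
print(f"${bill.total():.2f}")      # $1.20
```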
Any Nasty Details?
(loads of them!)
How do I pick a provider?
Am I locked in to a provider?
Where do I put my data?
What happens to my data when I shut down?
How do I log in to my server?
How do I keep others from logging in (security)?
How do I get an IP address?
And One Really Important Caveat*
How come Amazon?
Grew out of efforts to manage Amazon’s own services
(Each time you get a page from Amazon, over a hundred servers are involved)
See the Amazon Architecture reference for their service design concepts
They got so good at it that they launched Amazon Web Services (AWS) as a product
Cloud Computing Status
Seems to be rapidly becoming a mainstream practice
Amazon EC2 imitators ...
Just about every major industry name
IBM, Sun, Microsoft, ...
Major buzz at industry meetings
So What’s the Take-Away?
There seems to be a major revolution underway in how we manage hardware
Specify (machine per service, or one big machine with many virtual servers)
Purchase (own it yourself or rent from a public cloud)
Use (always-on, or flexible provisioning as needed ...)
We may need to rethink both our research and teaching
What About Research?
The Eucalyptus Project
From University of California Santa Barbara
An open source collection of tools to build your own cloud
Linux using Xen for virtualization
An apparently open research area: handling data
Regular databases apparently don't scale well
Especially hard to make elastic (scale up / scale down)
What About Teaching?
Our graduates should know about cloud computing / virtualization
It will be useful for some applications, though not for all
But what are the right learning objectives?
Awareness (its there ...)
Mechanics (here’s how to instantiate a server ...)
Design (how to make a scalable service ...)
For Fall 2009 ...
Currently developing a Virtualization / Cloud Computing “module”
1 – 2 class sessions plus an exercise
Target courses (November):
COP 6990 – Multi-Process Computing (Simmons)
CTS 4817 – Web Server Administration (Owsnicki-Klewe)
Awareness and mechanics of EC2
Active In SP
Joined: Feb 2011
17-03-2011, 10:45 AM
Though the concept of “clouds” is not new, it is undisputable that they have proven a major
commercial success over recent years and will play a large part in the ICT domain over the next 10
years or more, as future systems will exploit the capabilities of managed services and resource
provisioning further. Clouds are of particular commercial interest not only with the growing
tendency to outsource IT so as to reduce management overhead and to extend existing, limited IT
infrastructures, but even more importantly, they reduce the entrance barrier for new service
providers to offer their respective capabilities to a wide market with a minimum of entry costs and
infrastructure requirements. In fact, the special capabilities of cloud infrastructures allow providers
to experiment with novel service types whilst reducing the risk of wasting resources.
Cloud systems should not be misunderstood as just another form of resource provisioning
infrastructure; in fact, as this report shows, the principles behind cloud infrastructures open up
multiple opportunities that enable further types of applications and reduce the development and
provisioning time of different services. Cloud computing has particular characteristics that
distinguish it from classical resource and service provisioning environments:
(1) It is (more or less) infinitely scalable.
(2) It provides one or more of: an infrastructure for platforms, a platform for applications, or applications (via services) themselves.
(3) Clouds can thus be used for every purpose, from disaster recovery/business continuity through to a fully outsourced ICT service for an organisation.
(4) Clouds shift the costs of a business opportunity from CAPEX to OPEX, which allows finer control of expenditure, avoids costly asset acquisition and maintenance, and lowers the entry threshold barrier.
(5) The major cloud providers have already invested in large-scale infrastructure and now offer cloud services to exploit it.
(6) As a consequence, cloud offerings are heterogeneous and lack agreed interfaces.
(7) Cloud providers essentially provide datacentres for outsourcing.
(8) There are concerns over security if a business places its valuable knowledge, information and data on an external service.
(9) There are concerns over availability and business continuity, with some recent examples of failures.
(10) There are concerns over data shipping over anticipated broadband speeds.
The concept of cloud computing is linked intimately with those of IaaS (Infrastructure as a Service);
PaaS (Platform as a Service), SaaS (Software as a Service) and collectively *aaS (Everything as a
Service) all of which imply a service-oriented architecture.
Open Research Issues
Cloud technologies and models have not yet reached their full potential, and many of the capabilities
associated with clouds are not yet developed and researched to a degree that allows them to be fully
exploited, or to meet all requirements under all potential circumstances of usage.
Many aspects are still in an experimental stage where the long-term impact on provisioning and
usage is as yet unknown. Furthermore, plenty of as yet unforeseen challenges arise from exploiting
the cloud capabilities to their full potential, involving in particular aspects deriving from the large
degree of scalability and heterogeneity of the underlying resources. We can thereby distinguish
between technological gaps on the one hand, that need to be closed in order to realize cloud
infrastructures that fulfil the specific cloud characteristics and non-technological issues on the other
hand that in particular reduce uptake and viability of cloud systems:
The technological aspects include in particular issues related to (1) scale and elastic scalability,
which is currently not only restricted to horizontal scale-out, but also inefficient, as it tends
toward resource over-usage due to limited scale-down capabilities and full replication of instances
rather than only of essential segments. (2) Trust, security and privacy always pose issues in any
internet-provided service, but due to the specific nature of clouds, additional aspects arise, related
e.g. to multi-tenancy and control over data location. What is more, clouds simplify malicious use of
resources, e.g. for hacking purposes, but also for sensitive calculations (such as weapon design).
(3) Handling data in clouds is still complicated, in particular as data size and diversity grow: pure
replication is not a viable approach, leading to consistency and efficiency issues. Also, the lack of
control over data location and the missing provenance pose security and legal issues. (4)
Programming models are currently not aligned to highly scalable applications and thus do not exploit
the capabilities of clouds, whilst they should also simplify development. Along the same lines,
developers, providers and users should be able to control and restrict distribution and scaling
behaviour. This relates to (5) systems development and management, which is currently still
executed mostly manually, thus contributing to substantial efficiency and bottleneck issues.
On the other hand, non-technological issues play a major role in realizing these technological aspects
and in ensuring the viability of the infrastructures in the first instance. These include in particular
(1) economic aspects, which cover knowledge about when, why and how to use which cloud system,
and how this impacts the original infrastructure (provider); long-term experience is lacking in all
these areas; and (2) legal issues arising from the dynamic (location) handling of the clouds, their
scalability, and the partially unclear legislative situation on the internet. This covers in
particular issues related to intellectual property rights and data protection. In addition, (3) aspects
related to green IT need to be elaborated further, as the cloud offers principally “green capabilities”
by reducing unnecessary power consumption, given that good scaling behaviour and good economic
models are in place.
Europe and Clouds
Notwithstanding common beliefs, clouds are not a phenomenon entirely imported from abroad. This
report will elaborate the main opportunities for European industry and research to be pursued with
respect to the specific capabilities and remaining gaps.
This document provides a detailed analysis of Europe’s position with respect to cloud provisioning,
and how this affects in particular future research and development in this area. The report is based
on a series of workshops involving experts from different areas related to cloud technologies.
In more detail, the identified opportunities are: (1) Provisioning and further development of Cloud
infrastructures, where in particular telecommunication companies are expected to provide offerings;
(2) Provisioning and advancing cloud platforms, which the telecommunication industry might see as
a business opportunity, as well as large IT companies with business in Europe and even large non-IT
businesses with hardware not fully utilised. (3) Enhanced service provisioning and development of
meta-services: Europe could and should develop a ‘free market for IT services’ to match those for
movement of goods, services, capital, and skills. Again, the telecommunication industry could
supplement their services as ISPs with extended cloud capabilities; (4) provision of consultancy to
assist businesses to migrate to, and utilise effectively, clouds. This implies also provision of a toolset
to assist in analysis and migration.
download full report
Active In SP
Joined: Feb 2011
19-03-2011, 04:22 PM
Prashanth C Parekh
cloud.pptx (Size: 1.65 MB / Downloads: 112)
• Cloud computing is the provision of dynamically scalable and often virtualized resources as a service over the Internet on a utility basis.
• Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them.
• Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers
• As we all know, before we can work with an app we must follow complicated procedures: downloading, installing, configuring… and only then do we finally get to work with it.
• Even while working with the app, we have to deal with new features and upgrades, which take a lot of time and money.
• Every computing machine needs its own OS installed in order to work.
• The current Internet system is not effective at managing efficiency, time and costs simultaneously (in nerd's terms: not scalable, not reliable, etc.).
• The concept of cloud computing dates back to the 1960s, when John McCarthy suggested that computation may some day be organised as a public utility.
• Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, autonomic and utility computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure "in the cloud" that supports them.
• System Requirements
STANDARD CONFIGURED COMPUTER:
• Processor: any processor above 500 MHz
• RAM: 1 GB
• Hard disk: 80 GB
• Compact disk: 650 MB
• Input devices: standard keyboard and mouse
• Output device: VGA high-resolution monitor
Server side: (configured computer with Apache or Tomcat installed)
Client side: (ordinary computer with only an OS and a browser installed)
Operating System : Windows 2000 server Family.
Language : .NET, C#, Ajax , PHP
Browser : Mozilla Firefox
Active In SP
Joined: Feb 2011
21-03-2011, 12:58 PM
Cloudppt.ppt (Size: 1.27 MB / Downloads: 157)
What is Cloud Computing?
“Cloud computing is a general term for anything that involves delivering hosted services over the internet.” – Wikipedia
“Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid.”- Wikipedia
Basically a cloud is a virtualization of resources that manages and maintains itself.
• Enterprise Software today
• Cloud Computing Layers
• What is a Cloud?
Why Cloud computing?
• Traditional Software
• Software as a Service
• Cloud Computing
Broadly classified under the three categories:
1. IaaS: Infrastructure as a service.
2. PaaS: Platform as a service.
3. SaaS: Application/Software as a Service.
Infrastructure as a Service (IaaS):
Cloud infrastructure services or "Infrastructure as a Service (IaaS)" delivers computer infrastructure, typically a platform virtualization environment as a service. Rather than purchasing servers, software, data center space or network equipment, clients instead buy those resources as a fully outsourced service.
Examples:- IBM Blue house, VMWare, Amazon EC2, Microsoft Azure Platform, Sun Parascale and more
Benefits to the clients:
1. Stop worrying about heavy traffic and bandwidth requirements.
2. Pay as you go.
3. No need to buy high-configuration servers from day one.
4. Low maintenance.
Platform as a Service (PaaS):
Platform-as-a-service in the cloud is defined as a set of software and product development tools hosted on the provider’s infrastructure.
Developers create applications on the provider’s platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer’s computer. Force.com (an outgrowth of Salesforce.com) and Google Apps are examples of PaaS.
Developers need to know that currently there are no standards for interoperability or data portability in the cloud. Some providers will not allow software created by their customers to be moved off the provider’s platform.
Examples: middleware, integration, messaging, information, connectivity, etc.
AWS, IBM Virtual images, Boomi, CastIron, Google Appengine
Software as a Service (SaaS):
In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure and the software product, and interacts with the user through a front-end portal. SaaS is a very broad market. Services
can be anything from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from
anywhere. There are several successful SaaS models running all over the web.
Payroll, HR, CRM etc
SugarCRM, IBM LotusLive
Types Of Clouds
• Public Cloud: the services are delivered to the client via the Internet from a third party service provider.
• Private Cloud: these services are managed and provided within the organization. There are fewer restrictions on network bandwidth, fewer security exposures, and fewer legal requirements compared to the public cloud.
Example: HP Data Centers
Cloud computing technology
• Simply put, utility computing is a pricing model based on the quantity of resources used.
• Utility computing allows companies to only pay for the computing resources they need, when they need them.
• The main benefit of utility computing is better economics.
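The "better economics" claim can be illustrated with invented numbers: when demand is spiky, paying per server-hour used beats owning enough servers for the peak. All figures below are illustrative assumptions:

```python
# Illustrative utility-computing economics; every number is invented.
hourly_demand = [2, 2, 2, 20, 2, 2]   # servers needed in each hour
rate = 0.10                           # USD per server-hour, pay-as-you-go
owned_cost = 0.04                     # USD per server-hour, amortised purchase

# Utility: pay only for the server-hours actually used.
utility = sum(hourly_demand) * rate
# Ownership: must buy enough servers for the peak, and pay for them every hour.
owned = max(hourly_demand) * owned_cost * len(hourly_demand)

print(f"utility ${utility:.2f} vs owned ${owned:.2f}")
```

Even at less than half the hourly cost, owned capacity loses here because the peak forces the company to pay for twenty servers that sit idle most of the time.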
What does cloud computing mean to users?
• Lower client workload
• Lower total cost of ownership
• Separation of infrastructure maintenance duties from domain-specific application development
• Separation of application code from physical resources
• Not have to purchase assets for one-time or infrequent intensive computing tasks
• Expand resource on-demand
• Make the application have high availability
• Quickly deploy application
• Pay per use
• Cloud computing infrastructure
• Linearly Scalable
• Resource Monitor and measure
• Resource registration and discovery
• Difficulties for cloud computing
• Continuous high availability
• Interoperability and standardization
• Scalability of all components
• Data secrecy
• Legal and political problems of data storage and transmission across regions
• Performance issue
• Difficulty customizing
• Organizational obstacle
Cloud computing products and market
• Market Opportunities
• Cloud Providers
• Cloud computing open source projects and implementations
Infrastructure management projects and implementations:
Enomalism, convirt, redhat genome, hyperVM.
lxlabs, LN, OpenNEbula.
Useful open source projects and implementations to build a cloud platform:
Kenso, hyperic, virt-P2V
Active In SP
Joined: Feb 2011
28-03-2011, 12:51 PM
cloudcomputingdemystified-100511041345-phpapp02.pptx (Size: 1.22 MB / Downloads: 78)
The Little Story of Cloud Computing
I’m Utility Computing
I package computing resources as a metered service
I’m Distributed Computing
I allow computations to run on several networked computers
Cloud Computing is
A distributed computation model which offers managed, scalable, secure, highly available computation resources and software as a service
The Cloud abstracts the complexity of software.
The Cloud is the INTERNET
Infrastructure as a service (IaaS)
Primary enabler is Virtualization technology
You pay for processing time you ACTUALLY use
Amazon Elastic Computing Cloud (EC2)
Microsoft Azure Services Platform
Storage as a service
Scalable, Reliable and Highly Available
You pay for space and bandwidth you ACTUALLY use
Amazon Simple Storage Service (S3)
Microsoft Azure SQL Services
Microsoft Azure Simple Data Storage (SDS)
Solution stack to develop & deliver apps/services
Computing Platform as a service (PaaS)
Cloud Types: Instance & Fabric
Amazon Simple Queue Service
Microsoft Azure .NET Services
Microsoft Windows Azure (IIS7, .NET 3.5,