Modular Computing Seminar Report
IT's Challenge
In the past three years, the world has changed for information technology groups. In the late 1990s, the predominant problem was deploying equipment and software quickly enough to keep up with demand for computing. While the tech sector boomed on Wall Street, money was no object. IT budgets swelled and the numbers of computers in data centers grew exponentially.
Now, in the early 2000s, the picture is very different. IT budgets are flat or down, yet business demand for IT services continues to escalate. This combination of more demand and constrained budgets has compelled IT groups to consider new approaches to IT infrastructure, approaches that offer more flexibility and lower cost of ownership.
The common theme is cost cutting. In today's world, profits come less easily than in the 1990s. Competitors are more experienced, and competition is more intense. Corporations that trim costs while providing great service will prevail over those that can't.
IT plays a major role in this competitive situation. As competition becomes more intense, so does the pressure on IT to cut costs and boost contribution. Now more than ever, large corporations are using their computing assets as tools to pull ahead of the competition.
The January 13, 2003, issue of Time Magazine provides a great example of how IT contributes in new ways. Executives at a big-box retailer were considering dropping a particular brand of chicken from the shelves because the sales volume was poor. Then the retailer's data miners found that customers who bought that brand of chicken also bought large amounts of other merchandise. The chicken stayed.
Data mining, online transactions, and other new computing demands require collecting and processing enormous amounts of data. Still, IT departments are expected to keep up, even with flat or shrinking budgets. The bottom line is that IT will be doing more with less.
Modular Computing can slash costs in IT infrastructure. It enables IT groups to consolidate equipment, conserving expensive real estate. It offers the opportunity to migrate applications from expensive proprietary platforms to more powerful and more manageable systems.
Doing More With Less
To keep up with computing demand while operating within restricted budgets, IT must find ways to optimally use computing resources and reduce people costs. There are many areas of improvement.
Cost of Over-Provisioning
As data centers have moved toward servers and away from mainframes, IT has found that some mainframe capabilities weren't available on servers. A glaring example is that smaller servers were unable to rapidly obtain more processing power to accommodate peaks in computing demand.
As applications became more transactional, for example with customers entering information via the Web, these peaks in computing demand became more visible. During peak demand, customers saw their transactions slow down. In situations where these transactions affect the bottom line, as when customers enter purchases, prompt processing becomes vital to the business.
As the number of customers using Web services has increased, the peaks in computing demand became more intense and more frequent. Consequently, customers more frequently saw declines in performance.
Many data centers have ensured responsiveness to business requirements by over-provisioning--proactively sizing computing resources in anticipation of peak demand. In the world of traditional servers and legacy mainframes, over-provisioning makes sense. In fact, many advisory firms once recommended over-provisioning as a means of meeting peak demand.
In the ideal, an alternative to over-provisioning is for IT to obtain additional resources and bring them online as they see demand increase. In practice, even after obtaining the hardware, setting it up and configuring the software can take weeks. Given the real-time nature of the changes in the computing demand, deployment takes too long, so IT began relying on over-provisioning.
Over-provisioning has its own disadvantages. It leaves costly resources idle most of the time. CPU utilization in many data centers ranges from 15 to 20 percent for non-mainframe servers, chiefly because of the inability to rapidly reallocate unused resources during off-peak periods. Too much capital is tied up in under-utilized resources.
To reduce capital costs, IT needs an alternative to over-provisioning: a means of reallocating resources in minutes rather than in weeks to accommodate peaks in demand for an application.
Cost of High Availability
As transaction processing applications have become more common, more applications have been deemed mission-critical: capable of severely affecting the business when they slow down or stop running altogether. Hence the growing need for high availability.
However, high availability traditionally comes with a high price. Redundant equipment is expensive to buy, maintain, and manage. Additional software licenses, clustering software, and the professional services needed to implement a traditional configuration for high availability can cost more than the initial hardware. As a result, many IT groups continue to rely on expensive mainframes or RISC servers that use costly switched redundant connections to provide high availability.
Data centers need high availability, but they don't need added expense. They need high availability on equipment that costs less, eliminates the need for extra software and professional services, and automates management.
Cost of Too Many People Doing Low-Level Tasks
Labor is the largest expense associated with IT. According to Giga Information Group, labor represents 46% of IT budgets. Finding a way to move administrators from low-level tasks to more productive tasks would greatly improve an IT department's ROI.
Excessive Server Management
Consider a data center with 1000 application servers. Each class of servers has its own management and provisioning process. To support these servers, IT needs experts for each class of server. In addition to their unique knowledge, these experts have many redundant skills. If server management could be simplified, many of these experts could be shifted to tasks with higher ROI than managing servers.

Excessive Deployment Expense
Installing and configuring hardware and software takes much more administrator time than one would expect. According to Giga Information Group, "Management of most large collections of servers is a manually intensive process. Highly automated management of servers, particularly the deployment of applications and operating system images, is more the exception than the rule… Moving an application from one server to another is a delicate task requiring days for a skilled administrator."
Complex deployment also contributes to stranded resources.
Excessive Cable Management
A full rack of traditional servers can need over 200 cables to provide the redundant connections necessary for high availability. Such large numbers of cables complicate cable management. Giga Information Group says that, in large data centers that have many reconfigurations, system administrators can spend up to 25 percent of their time managing cables.
IT needs a means of spending less time on cables.
Cost of Stranded Resources
Closely related to over-provisioning is the dilemma that causes stranded resources. For example, suppose that demand for an application crests, then declines over a period of months. Three factors make data center management reluctant to harvest computing resources associated with the application:
• The cost of the administrative time spent removing the resource from the first application and reconfiguring it for the second.
• The risk of destabilizing the declining application. Removing some of the hardware used to process an application is complex; without extreme attention to detail, it's possible to cause the application to fail.
• The possibility that demand for the declining application may return after the resources have been reassigned. Should demand return, another costly and risky harvest and reallocation would begin.
The resulting stranded resources remain unused, prematurely forcing IT groups to buy equipment to deploy new applications and upgrade existing ones.

Many research firms have come to the conclusion that enterprise computing must change. Gartner Group envisions policy-based computing. Forrester Research envisions Organic IT. Giga Information Group envisions modularity and virtualization.
Intel is taking a leadership position in this movement. Intel expects Modular Computing to play a major role in enterprise computing.
A New Computing Paradigm
Modular Computing relies on a new paradigm for computers. Modular Computing draws elements from pools of computing resources: processing, storage, and networking. Together, these resources become a virtual server, a computer that can be assigned to run one or more applications. However, unlike a traditional server, when demand for an application changes, a virtual server can be dynamically repurposed in just minutes.
A virtual server is logically integrated rather than physically integrated. This distinction is essential for enabling potent management of the resources. A control module, running Modular Computing software, manages the creation of virtual servers and facilitates real-time allocation and deallocation of resources.
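The pool-and-allocate model described above can be sketched in a few lines of Python. This is purely illustrative; the names (`ResourcePool`, `VirtualServer`) are invented for the example and are not any vendor's API.

```python
# Illustrative sketch of the Modular Computing resource model:
# virtual servers are assembled from, and returned to, shared pools.

class ResourcePool:
    """A pool of interchangeable units (processing, storage, or network)."""
    def __init__(self, kind, units):
        self.kind = kind
        self.free = list(units)

    def allocate(self):
        if not self.free:
            raise RuntimeError(f"{self.kind} pool exhausted")
        return self.free.pop()

    def release(self, unit):
        self.free.append(unit)


class VirtualServer:
    """A logical server: resources are bound by software, not by cables."""
    def __init__(self, name, cpu_pool, storage_pool):
        self.name = name
        self.cpu = cpu_pool.allocate()
        self.disk = storage_pool.allocate()

    def teardown(self, cpu_pool, storage_pool):
        # Repurposing: resources return to the pools in minutes, not weeks.
        cpu_pool.release(self.cpu)
        storage_pool.release(self.disk)


cpus = ResourcePool("processing", ["blade-1", "blade-2", "blade-3"])
disks = ResourcePool("storage", ["lun-1", "lun-2", "lun-3"])

vs = VirtualServer("web-app", cpus, disks)
vs.teardown(cpus, disks)   # freed units are immediately reusable elsewhere
```

The key point the sketch captures is that allocation and release are software operations on shared pools, which is what makes minute-scale repurposing possible.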
Processing Resource
Modular Computing's processing resource is based on Intel Architecture (IA) processors because of their superior price for performance across all business and technical workloads. Intel server processors range from the 32-bit Intel Xeon processor MP, with strong transaction and I/O processing capabilities, to the 64-bit Itanium processor family, with high-performance floating-point execution. Because of their robust capabilities and price for performance, servers based on IA processors are very popular. A processing unit is the smallest chunk of processing power that can be deployed from the processing resource pool. For example, if the processing resource pool consists of 4-way SMPs, a processing unit is a 4-way SMP.
Storage Resource
For Modular Computing, the storage resource should be a Storage Area Network (SAN) or network-attached storage (NAS). Using SAN or NAS allows a computing facility to concentrate the storage in one physical location and obtain economies of scale. For example, mirroring, backup, and offsite archiving processes are much more cost effective on a SAN or with NAS than when applied to directly attached storage.
In addition, SAN or NAS allows the server's personality (operating system, application, and data) to be defined completely by the content of storage. The processing resource can be diskless and anonymous. This allows any processing unit to be assigned to any application, facilitating the dynamic nature of logical, rather than physical, connections. If storage were directly attached to the processing resource, the personality would follow the processing resource, making it less suitable for use with a different application. If a virtual server consumes its storage resource, the Modular Computing software automatically allocates another unit of storage to the server.
Networking Resource
The networking resources should be a high-speed network accessed through a high-speed switch. This should provide access to both the LAN and, if needed, the internet. Just as storage resources are flexibly allocated to meet computing demand, networking resources must be scalable so bandwidth does not hinder performance.

Modular Computing Software
The Modular Computing software (MCS) is a vital part of Modular Computing. It obtains resources from resource pools and aggregates them into virtual servers. It also provides an interface for administrators. Running on a control module, it can oversee several virtual servers. The MCS also monitors the health of each virtual server, allocates a replacement when a resource fails, and then informs the administrator about the status of the failed resource.
Fewer cables to manage
To facilitate expansion and maintenance, the processing and networking resources, along with the control modules, could be mounted in the same rack. If this rack provides a high-speed interconnect, it can reduce the number of cables from more than two hundred to a mere handful. Two cables from redundant switches replace the NIC cables for all servers. All the virtual servers access the storage resource through just two cables. And a forest of KVM cables is eliminated by providing an administrator interface across the network.

Modular Computing increases agility, while reducing equipment and people costs.
Increased Agility
Changes in computing demand need no longer cause panic. The Modular Computing software (MCS) can monitor the status of virtual servers in real time. As demand for an application changes, the MCS can adjust the number of virtual servers to match, in minutes instead of weeks. This real-time load balancing prevents applications from slowing down for long periods. The users of the applications don't suffer lengthy response times associated with overloaded servers.
Equipment failure no longer takes applications offline. When the MCS detects a failure in the equipment allocated to a virtual server, the MCS logically swaps out the failed equipment, replacing it with resources from the pool within minutes. Applications keep running.
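The logical swap described above can be sketched as a small routine. This is a hypothetical illustration of the idea, not the actual MCS implementation; the function and data shapes are invented.

```python
# Hypothetical sketch of MCS-style automatic failover: when a blade
# fails, a spare from the shared pool is logically swapped in.

def failover(server, spare_pool, notify):
    """Replace a failed blade with a spare; no rack visit required."""
    failed = server["blade"]
    if not spare_pool:
        raise RuntimeError("spare pool exhausted")
    server["blade"] = spare_pool.pop()   # logical swap, done in software
    notify(f"blade {failed} failed; replaced by {server['blade']}")
    return failed


events = []
spares = ["spare-A", "spare-B"]
server = {"name": "orders", "blade": "blade-7"}

failed_blade = failover(server, spares, events.append)
# The application keeps running on the replacement blade; the failed
# unit is merely reported for later physical service.
```

Because the swap is a software operation against a shared pool, the same handful of spares can back any server in the environment.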
Because this failover capability is automatic and fast, it enables administrators to extend failover coverage beyond mission-critical applications to all applications running in the Modular Computing environment.
Suppose demand grows for many applications, threatening to regularly consume all of one of the resource pools. Rather than purchasing an expensive, traditional server, IT purchases only the resources needed (processing units, storage units, or network capacity) and adds them to the resource pools. The MCS takes care of deployment in minutes as demand fluctuates.
Reduced People Costs
With traditional, physically integrated servers, equipment failure often means an administrator needs to visit the rack immediately to make replacements. Each such visit is time consuming and costly. Rack visits become rare with Modular Computing.
The Modular Computing software (MCS) acts automatically. It uses parameters set by administrators to govern resource distribution. Once the administrator has set the parameters, the software can balance loads or invoke failover procedures without human intervention, in minutes.
In addition, because adding resources to a Modular Computing environment is so easy, substantially less administrator time is spent on configuration and setup.
Management, too, becomes easier. All applications running in a Modular Computing environment are monitored by the Modular Computing software. Compare this to a collection of disparate, physically integrated servers, where each server class needs unique management tools. Reducing the number of management tools means fewer specialized experts.
Consequently, IT management can move people from administrative duties to activities with higher ROI, such as planning or application development.
Reduced Equipment Costs
All applications running in a Modular Computing environment share the same resource pools. In other words, the entire collection of virtual servers draws load-balancing or failover resources from the same resource pools. In contrast, with traditional computing, each mission-critical application needs spare equipment standing by for failover or load balancing.
With Modular Computing, a little spare resource protects all applications. Because less resource can do the job, utilization of resources is higher.
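The saving from pooled spares is easy to quantify. The figures below are illustrative, not drawn from the report: suppose ten mission-critical applications each traditionally keep one dedicated standby server, while a shared pool sized for two simultaneous failures needs only two spares.

```python
# Illustrative arithmetic: dedicated standby servers vs. a shared pool.

apps = 10
dedicated_spares = apps * 1     # one idle standby per application
pooled_spares = 2               # pool sized for two concurrent failures

saving = dedicated_spares - pooled_spares
print(f"spare servers saved: {saving}")   # 8 fewer idle machines
```

Under these assumptions, eight of ten standby machines disappear, which is where the higher utilization comes from.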
A related benefit of Modular Computing is the absence of stranded resources. The MCS harvests under-utilized resources automatically.
Modular Computing helps IT do more with less. By increasing utilization of computing resources, Modular Computing holds down capital expenditures. By freeing administrators from tasks such as load balancing and deployment of hardware, it makes them available for other tasks, with higher ROI.

The Egenera BladeFrame system
The Egenera BladeFrame system consists of Modular Computing software, connections for SAN or NAS and IP networking, and as many as 24 virtual servers based on Intel processors.
The BladeFrame provides a pool of up to 96 Intel processors, deployable entirely through software, with no physical intervention. The system components are listed in the following table.
Components and Description
Processing Blade: 2-way or 4-way, diskless, symmetric multiprocessors (SMPs) using Intel processors. Each virtual server uses one Processing Blade. The BladeFrame system can contain as many as 24 Processing Blades.
Control Blade: This is the control module for the BladeFrame system. It runs the Modular Computing software and provides security for the Processing Blades. To ensure high availability, each BladeFrame system has two Control Blades.
Switch Blade: This is the networking resource for the BladeFrame system. It provides communication with the SAN or NAS and the IP network. To ensure high availability, each BladeFrame system has two Switch Blades.
BladePlane: High-speed interconnect. Enables communication between components within the BladeFrame system.
PAN Manager: Modular Computing software (MCS) to configure virtual servers and govern failover and load balancing. Administrators can use the browser-based interface or can write scripts to provide control through a command-line interface.
The system resides in a 24x30x84-inch chassis containing a redundant BladePlane, two Control Blades, two Switch Blades, and up to 24 Processing Blades. The BladeFrame system is a processing resource for the data center. The Processing Blades are diskless, accessing the data center's storage area network (SAN) or network-attached storage (NAS) for storage resources, software, and data.
Separating processing resource from storage lets the processing resource remain anonymous, that is, not permanently dedicated to any particular application(s). Anonymity facilitates reallocating Processing Blades between the processing resource pool and virtual servers. Egenera calls this diskless architecture a Processing Area Network, or PAN, and the management software is called PAN Manager.
This PAN architecture facilitates efficient use of processing resources. As demand for a particular application declines, PAN Manager software reduces the number of virtual servers assigned to that application, making their resources available for other applications. PAN Manager shifts resources automatically, in minutes. By rapidly distributing resources to where they are needed, PAN architecture eliminates costly over-provisioning. Should a piece of equipment fail, PAN Manager detects the failure, notifies the administrator, and allocates a replacement resource, all within minutes.
The BladeFrame system greatly reduces cable count. With traditional architecture, each single-processor server can require more than five cables, without providing redundancy. With the BladeFrame, as many as 96 Intel processors can be redundantly connected to the storage and IP networks with as few as four cables. This huge reduction in cables saves many error-prone hours during installation, while offering fewer failure points and increased density of servers. By reducing cable count, the BladeFrame contributes to higher reliability (because of fewer failure points), more efficient use of administrators (by saving cabling time), and less stranded equipment (by simplifying harvesting and redeployment).
Many benefits of the BladeFrame system derive from the Egenera PAN Manager software, which provides a single control point for monitoring and allocating both physical and logical resources. Using PAN Manager software, administrators can rapidly adjust logical configurations to service changing demand. Tasks that were once physical and required weeks are now accomplished through software in minutes.
The hardware and software modules of the BladeFrame system work together to provide automation and rapid, flexible deployment. The BladeFrame system saves administrator time associated with cable management and other deployment issues. It automates harvesting and reassigning resources, while slashing the cost of high availability.
NUMAflex by SGI
SGI (NYSE: SGI), known worldwide for providing a broad range of high-performance computing and advanced graphics solutions, today announced a technology that promises to help break the "digital ceiling"--the performance limits that block progress in the rapidly evolving digital economy and crucial efforts in medicine, science, manufacturing and media. The modular, brick-style technology--called NUMAflex™--also stands to revolutionize the way people buy high-performance computers, allowing them to expand and upgrade only the elements they need for their systems or add new technologies as they become available. Traditionally, users have had to buy expensive "one size fits all" systems that either were too much for their needs or became obsolete quickly and had to be replaced--a costly and cumbersome process.
"This truly is a milestone for the industry and for SGI," said Bob Bishop, chairman and CEO, SGI. "Not only does this new technology stand to change the way advanced computer systems are built and used, but its flexible, cost-effective design means that more complex problems than ever before can have access to the power of supercomputers."
The first SGI® products to utilize NUMAflex technology--the SGI® Origin® 3000 series of servers and the SGI® Onyx® 3000 series of visualization systems--are available immediately. A large number of orders have been placed by notable clients such as the U.S. Army Engineering Research Development Center and NASA/Ames Research Center. These organizations need to solve demanding problems such as financial analytics, crash-test simulation, and aircraft design. In addition, Sony Computer Entertainment Inc. has selected the SGI Origin 3400 as the broadband server for a next-generation entertainment demonstration at SIGGRAPH 2000.
About the Technology
With NUMAflex technology, each drawer-like module in a system has a specific function and can be linked, through the patented SGI high-speed system interconnect, to many other bricks of varying types to create a fully customized configuration. The same bricks, depending on their number or configuration, can be used for a continually expanding range of high-performance computing needs: C-brick (CPU module), P-brick (PCI expansion), D-brick (disk storage), R-brick (system/memory interconnect), I-brick (base I/O module), X-brick (XIO expansion) and G-brick (InfiniteReality® graphics). New brick types will be added to the NUMAflex modular offering for specialized configurations (e.g., broadband data streaming) and as new technologies, such as PCI-X and Infiniband, enter the market. The systems can also be deployed in clusters or as large shared-memory systems, depending on users' needs.
Without this modular approach, conventional high-performance systems often need to be replaced as often as once a year to keep up with changing needs, new technology or competitive pressures--at a cost potentially in the millions of dollars for each replacement. This daunting prospect can limit the progress of research and development and can hold industries and scientific pursuits back.
"This technology represents a real revolution in thinking," said Jan Silverman, vice president, Advanced Systems Marketing, SGI. "It's analogous to when people switched from all-in-one stereo systems at home to buying components for a home-based theater. Before, you had to throw out the whole stereo because 8-track died; now you just add the DVD."
Executing on SGI's Corporate Strategy
From its inception, SGI has accepted the challenge of the technical and creative user communities, working to provide them with the most advanced computational tools. The new SGI® 3000 family is a bold and dynamic example of the company's promise to serve these users with industry-leading, dependable products and services that are second to none for keeping them ahead of the technology curve and ahead of the competition.
"SGI's customers--technical and creative computer users-are continually demanding new products and solutions to help them reach new heights in their own work," said Bishop. "NUMAflex modular computing is just the latest success in our effort to meet the needs of these customers and to help them--and SGI--stay ahead of the competition."
The new family of SGI® Origin® 3000 series servers and SGI® Onyx® 3000 series graphics systems makes real the long-held dream of truly modular computing. Now, technical and creative computer users can have the same modularity, freedom of choice, and ease of upgrade that people have long benefited from in assembling and enhancing their home-entertainment centers. In unprecedented fashion, SGI delivers on the promise of superior performance, custom configuration, resiliency, and investment protection.
As Ben Passarelli, SGI's director of Server Product Marketing, says, "With modular computing, customers can buy precisely what they need, when they need it, with the confidence that they will be able to add the late-breaking technologies of the future to what they already have."
A Superior Architecture
The newly announced SGI® 3000 family of systems marks the return of the company to its time-honored leadership position in the realm of technical and creative computing. The basis for the 3000 family is NUMAflex™ technology, SGI's innovative and flexible use of a superior supercomputer architecture.
As an architecture for high-performance multiprocessor computers, SGI® NUMA (nonuniform memory access) exceeds the capabilities of the SMP (symmetric multiprocessing) architecture used in previous generations of supercomputers. SGI NUMA makes it possible for systems to increase shared memory as needed to meet the demands of CPU-to-memory bandwidth whenever additional processors are added to a configuration. Growing out of a joint project and implementation with Stanford University that began more than 10 years ago, SGI NUMA gives technical and creative users superior scalability and performance. SGI is the only computer manufacturer capable of offering third-generation NUMA architecture, leveraging the company's long expertise in leading-edge computing.
NUMAflex technology takes advantage of the architecture through modular bricks that add specialized capacities in graphics, central processing, storage, PCI expansion, or I/O capacity. Even the internal interconnect is modular, so that large installations can be built from small ones, one brick at a time.
Winning through Modularity
NUMAflex technology gives technical and creative customers choices and growth paths never before available. As Janet Matsuda, SGI's director of Graphics Product Marketing, says: "Modularity offers both savings and scalability so that customers don't waste their money on what they don't want and can spend it on what they do want."
Debra Goldfarb, group vice president at analyst firm IDC, agrees: "Modular computing empowers end users to build the kind of environment that they need not only today but over time. SGI, with this product, is really ahead of the curve in the market. We are seeing the [rest of the] industry absolutely trying to catch up" with SGI.
In addition, SGI Origin 3000 servers and SGI Onyx 3000 visualization systems reflect a return to SGI's core competencies.
"It is very exciting for us to see that SGI is once again really becoming true to the mission it had years ago, that of leading the industry in technical computing, " says Goldfarb. "This company has really hit it this time and [we] believe this is really the right technology at the right point in time."
The Power of Visualization
Of course visualization, along with data handling and scalable architecture, has always been one of SGI's three main core competencies. The new SGI Onyx 3000 series, which utilizes next-generation InfiniteReality3™ graphics, will be able to aid users in what Matsuda calls "their need to understand." Says Matsuda, "You can get powerful visualization with powerful computing, because your eyes are the widest channel to the brain. And sometimes you need to give people experiences you don't want them to have in real life."
A unique feature of InfiniteReality3 is its ability to perform visual serving, delivering powerful graphics capabilities over a network as needed. The new SGI Onyx 3000 series systems are also optimized for real-time simulation, such as in planetariums, Reality Center® facilities, digital media and geospatial imaging.
A final component of SGI's renewed focus on its customers and what Passarelli calls "working to our strengths" is SGI's continuing strong commitment to both MIPS® and IRIX®, which is evidenced by unprecedented customer demand for the new product line. While SGI sees long-term strategic value in the company's involvement with the Open Source community, "We remain fanatically committed to helping our customers solve their problems in the here and now. For customers on the leading edge, if you give them more capabilities, more compute power, and greater visualization, they can do amazing things."

Call it grid computing. Or modular computing. Or policy-based computing or utility computing. Intel, which is opting for the modular designation, is preaching distribution of processing power to boost performance and reliability. Modular computing represents a new paradigm that requires advances in both software and hardware, according to Intel. "There (are) a lot of people that associate modular computing (with) blades and blade form factors. It's important to know this is far more than form factors and far more than blades," said Abhi Talwalkar, vice president of the Intel platform products group, in Hillsboro, Ore., during a presentation at the Intel Developer Forum.
Modular computing, the joining of multiple computing resources, is an answer for exponential data growth, application and server sprawl, and disaggregation of storage, according to Intel. The concept also is critical in today's tough economic times, with IT cutbacks, Talwalkar said. Modular computing is characterized by a growth in hardware clustering and distributed computing along with software developments such as the deployment of application servers and the use of Web services for intersystem communication, he said. "It's really advances in system management and clustering technology that's going to drive much of the adoption here," Talwalkar said. Clustering might displace large symmetric multiprocessing systems over time, he said. Automation, enabling dynamic allocation of resources, is probably the "heart" of modular computing, according to Talwalkar. Developments in automation are needed, such as self-healing systems, failover, and dynamic performance optimization, he said.
Benefits of modular computing include maximization, efficiency, Internet reliability, and seamless and simplified management, according to the company. For example, modular computing will maximize use of a server that might have 40 percent of its capacity not being used, Talwalkar said. "Software is going to drive the success of modular computing 100 percent," Talwalkar stressed.
One IDF attendee, however, criticized Intel for recently backing away from plans to produce InfiniBand-based hardware. InfiniBand, said Anil Vasudeva, president and CEO of research firm Imex Research, of San Jose, is key to making blade servers function together. InfiniBand is a next-generation switched-fabric I/O technology. "Intel seems to have done a big boo boo job on that," Vasudeva said. Talwalkar said that given current economics, there were "some very difficult decisions to make at Intel in terms of productizing components."

Stretching the IT Dollar
Modular Computing replaces the physical connections between computing resources with logical connections. Because the connections are logical, Modular Computing software can monitor and control how virtual servers use resources.
This software-based monitoring and controlling enables automated resource management, where the software continuously redistributes resources according to parameters provided by an administrator. This, along with simplified server management and reduced cable count, means large collections of servers need fewer administrators.
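The monitor-and-redistribute behaviour described above can be sketched as a policy loop: the administrator supplies thresholds, and the software moves capacity from underloaded virtual servers to overloaded ones. Everything below (threshold values, share units, server names) is an illustrative assumption, not any vendor's actual API.

```python
# Policy-based reallocation sketch: shift CPU shares from underloaded
# virtual servers to overloaded ones, per admin-supplied thresholds.

POLICY = {"high": 0.85, "low": 0.30, "step": 1}  # administrator parameters

def rebalance(vservers, policy=POLICY):
    """One rebalancing pass; mutates `vservers` and returns the moves made."""
    donors = [v for v in vservers
              if v["load"] < policy["low"] and v["shares"] > 1]
    takers = [v for v in vservers if v["load"] > policy["high"]]
    moves = []
    for taker in takers:
        if not donors:
            break  # no spare capacity left to move
        donor = donors.pop()
        donor["shares"] -= policy["step"]
        taker["shares"] += policy["step"]
        moves.append((donor["name"], taker["name"]))
    return moves

fleet = [
    {"name": "billing", "load": 0.95, "shares": 4},
    {"name": "reports", "load": 0.10, "shares": 4},
]
print(rebalance(fleet))  # [('reports', 'billing')]
```

Run continuously, a loop like this is what lets one administrator oversee a large pool: the policy, not a person, makes the routine reallocation decisions.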
Modular Computing uses small amounts of spare resource to provide failover and load balancing for all applications running in the Modular Computing environment. This eliminates stranded resources, boosts resource utilization, and holds down capital expense.
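The spare-pool arithmetic behind this claim is simple: with dedicated failover, every application needs its own standby machine, while a shared pool needs only enough spares to cover the failures expected at one time. The fleet sizes below are hypothetical:

```python
def standby_servers(apps, pooled, concurrent_failures=1):
    """Standby servers needed: one per app if dedicated, N+k if pooled."""
    return concurrent_failures if pooled else apps

apps = 20
dedicated = standby_servers(apps, pooled=False)                      # 20 idle machines
shared = standby_servers(apps, pooled=True, concurrent_failures=2)   # 2 shared spares
print(dedicated, shared, dedicated - shared)  # 20 2 18
```

Those 18 reclaimed machines are exactly the "stranded resources" the paragraph above refers to: capacity that sat idle purely because it was physically tied to one application.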
Because Modular Computing is built on IA processors, it offers better price for performance and ensures a broader choice of software vendors and software. A larger selection of software can speed application development, and competition between software vendors can hold down development costs.
Modular Computing is a concept for the future, but it is available now, in products shipping today. It is already proving itself by saving IT money.



The modular, brick-style technology stands to revolutionize the way people buy high-performance computers, allowing them to expand and upgrade only the elements they need for their systems or add new technologies as they become available. Traditionally, users have had to buy expensive "one size fits all" systems that either were too much for their needs or became obsolete quickly and had to be replaced--a costly and cumbersome process.
Without this modular approach, conventional high-performance systems often need to be replaced as often as once a year to keep up with changing needs, new technology or competitive pressures--at a cost potentially in the millions of dollars for each replacement. This daunting prospect can limit the progress of research and development and can hold industries and scientific pursuits back.
Now, technical and creative computer users can have the same modularity, freedom of choice, and ease of upgrade that people have long enjoyed in assembling and enhancing their home-entertainment centers.


I express my sincere thanks to Prof. M.N. Agnisarman Namboothiri (Head of the Department, Computer Science and Engineering, MESCE) and Mr. Sminesh (staff in charge) for their kind co-operation in presenting this seminar.
I also extend my sincere thanks to all other members of the faculty of the Computer Science and Engineering Department, and to my friends, for their co-operation and encouragement.

