1. What is Azure Cloud Service?
The technology in which computer resources are provided as a service over the internet to end users is termed cloud computing. These resources can be compute cores, memory, storage, and networking, or even software such as databases and operating systems.
2. What are the advantages of using cloud computing?
The advantages of using cloud computing are
- Data backup and storage of data
- Powerful server capabilities
- SaaS (Software as a Service)
- Information technology sandboxing capabilities
- Increase in productivity
- Cost-effective and time-saving
3. What is a cloud service role?
A cloud service role comprises application files and a configuration. A cloud service can have two types of roles:
Web role / Web application: A web role is a web application that can be accessed via HTTP. A web role supports ASP.NET and Windows Communication Foundation (WCF) technologies.
Worker role / Worker process: A worker role is a background-processing application, somewhat similar to a Windows process. A worker role does not communicate directly with the external world; in other words, it does not accept requests directly from the external world.
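For illustration only, here is a minimal worker-role skeleton built on the RoleEntryPoint class from Microsoft.WindowsAzure.ServiceRuntime; the class name, connection limit, and sleep interval are illustrative assumptions, not code from the article.

using System;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization before the role starts processing.
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }

    public override void Run()
    {
        // Background loop: no IIS and no inbound HTTP; work is pulled, not pushed.
        while (true)
        {
            // e.g. read a message from a queue and process it here
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}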
4. What is a link to a resource?
To show our cloud service’s dependencies on other resources, such as an Azure SQL Database instance, we can “link” the resource to the cloud service. In the Preview Management Portal, we can view linked resources on the Linked Resources page, view their status on the dashboard, and scale a linked SQL Database instance along with the service roles on the Scale page. Linking a resource in this sense does not connect the resource to the application; we must configure the connections in the application code.
5. What does it mean to scale a cloud service?
A cloud service is scaled out by increasing the number of role instances (virtual machines) deployed for a role. A cloud service is scaled in by decreasing role instances. In the Preview Management Portal, we can also scale a linked SQL Database instance, by changing the SQL Database edition and the maximum database size, when we scale our service roles.
6. What is a web role?
A web role provides a dedicated Internet Information Services (IIS) web server used for hosting front-end web applications.
7. What is a worker role?
Applications hosted within worker roles can run asynchronous, long-running or perpetual tasks independent of user interaction or input.
8. What is a role instance?
A role instance is a virtual machine on which the application code and role configuration run. A role can have multiple instances, defined in the service configuration file.
9. What is a guest operating system?
The guest operating system for a cloud service is the operating system installed on the role instances (virtual machines) on which our application code runs.
10. What is the Service Model in Cloud Computing?
Cloud computing providers offer their services according to three fundamental models: Infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) where IaaS is the most basic and each higher model abstracts from the details of the lower models.
Examples of IaaS include Amazon CloudFormation (and underlying services such as Amazon EC2), Rackspace Cloud, Terremark, Windows Azure Virtual Machines, Google Compute Engine, and Joyent.
Examples of PaaS include Amazon Elastic Beanstalk, Cloud Foundry, Heroku, Force.com, EngineYard, Mendix, Google App Engine, Windows Azure Compute and OrangeScape.
11. How many types of deployment models are used in the cloud?
There are 4 types of deployment models used in the cloud:
- Public cloud
- Private cloud
- Community cloud
- Hybrid cloud
12. What is the Windows Azure Platform?
A collective name for Microsoft's Platform as a Service (PaaS) offering, which provides a programming platform, a deployment vehicle, and a runtime environment for cloud computing hosted in Microsoft datacenters.
13. What are the roles available in Windows Azure?
All three roles (web, worker, VM) essentially run on Windows Server 2008. Web and worker roles are nearly identical: with web and worker roles, the OS and related patches are taken care of for you, and you build your app's components without having to manage a VM.
14. What is the difference between the Windows Azure Platform and Windows Azure?
The former is Microsoft's PaaS offering, including Windows Azure, SQL Azure, and AppFabric, while the latter is part of that offering and is Microsoft's cloud OS.
15. What are the three main components of the Windows Azure Platform?
- Compute
- Storage
- AppFabric
16. What is Windows Azure compute emulator?
The compute emulator is a local emulator of Windows Azure that you can use to build and test your application before deploying it to Windows Azure.
17. What is fabric?
In Windows Azure, the cloud fabric is nothing but a combination of many virtualized instances that run the client application.
18. How many instances of a Role should be deployed to satisfy Azure SLA (service level agreement)? And what's the benefit of Azure SLA?
Two. If we do so, the role will have external connectivity at least 99.95% of the time.
19. What are deployment environments?
Azure offers two deployment environments for cloud services: a staging environment, in which we can test our deployment before promoting it, and a production environment. The two environments are distinguished only by the virtual IP addresses (VIPs) by which the cloud service is accessed. In the staging environment, the cloud service’s globally unique identifier (GUID) identifies it in URLs (GUID.cloudapp.net). In the production environment, the URL is based on the friendlier DNS prefix assigned to the cloud service (for example, myservice.cloudapp.net).
20. What are the main types or categories of the cloud, and which of them does Azure provide?
Three main categories:
- IAAS – Infrastructure As A Service
- PAAS – Platform As A Service
- SAAS – Software As A Service
21. What are the benefits of the cloud?
- Pay for what we use – Pay only for the services that we’ve used and not for everything. Pay per hour is the model that almost all cloud providers use; Azure plans to come up with a pay-per-minute option soon.
- Highly scalable – Create even 1000 servers within minutes.
- No Cap-Ex – Eliminates capital expenditure, only requires operational expenditure.
- Quick DR – Disaster Recovery can be very fast at a low cost.
22. What is the difference between an Azure VM and Azure cloud service instances?
Azure VM provides us with an entire virtual machine, where we can decide on the OS, patches, updates, etc. In short, we have control over our own machine, but it resides in an MS datacenter.
Azure instances come within a cloud service, where we only worry about the application within it; other tasks such as the OS, patching, updating, etc. will be taken care of by MS. Instances are more suited if you have a web application.
23. What are the role types of Azure cloud service instances?
The two roles are:
Web role – where the front end code of our application will reside. It has IIS running inside it.
Worker role – where the core code (for our service) runs. It doesn’t have IIS.
24. How is an application moved to the Azure cloud service?
The three main file types used are:
- .csdef – cs definition file defining service models as well as the number of roles
- .cscfg – cs configuration file defining configuration settings as well as the number of role instances
- .cspkg – cs package file, containing the application code as well as the csdef file
25. What is storage? What are the types in Azure?
Storage normally means some space where the user can store data. Almost all cloud services provide two different classes of storage: one termed storage itself and the other termed database.
Storage would normally be object-based storage, where the user can store images, videos, or similar content. Databases are the place to store data in a table-like structure.
In Azure, storage is again divided into 4 types (a short client sketch follows the list):
- Blob – For storing unstructured data such as documents or media files.
- Queues – Messaging store for workflow processing.
- Tables – For structured, non-relational (NoSQL) data.
- Files – New service, shared storage for apps using the SMB protocol.
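As a short illustration (not code from the article), the classic Microsoft.WindowsAzure.Storage client library is one way to reach these services from .NET; the container and blob names below are placeholders, and the connection string targets the local storage emulator.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobDemo
{
    static void Main()
    {
        // "UseDevelopmentStorage=true" points at the local storage emulator.
        CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");

        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("documents");
        container.CreateIfNotExists();

        // Blob storage: unstructured data such as documents or media files.
        CloudBlockBlob blob = container.GetBlockBlobReference("hello.txt");
        blob.UploadText("Hello from Azure Blob Storage");
    }
}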
26. Can we extend the size of an Azure VM hard disk?
By default, using the Azure management portal, we can't do this. There are some third-party tools like CloudXplorer which will help us to extend our hard disk space. Alternatively, if our hard disk is small enough (in used space) and we have a good internet connection, we can download the hard disk, extend it using the Hyper-V Manager snap-in, and upload it back to Azure.
27. How does Azure pricing differ for an Azure VM hard disk and Azure Blob Storage?
An Azure VM hard disk is a .vhd file backed by a page blob. You are allowed to create a hard disk of 1 TB maximum size (as of 14-May-2015), but MS charges you for how much data is stored in that page blob. This means that even if you have a 1 TB page blob and you store just 2 GB of data, you are charged for 2 GB rather than for 1 TB.
Azure Blob Storage uses block blobs. The maximum size of a block blob is 200 GB (as of 14-May-2015). Even if you store just 2 GB of data in your 5 GB of allocated space, you are charged for the entire 5 GB.
28. What is swapping deployments?
To promote a deployment in the Azure staging environment to the production environment, we can “swap” the deployments by switching the VIPs by which the two deployments are accessed. After the deployment, the DNS name for the cloud service points to the deployment that had been in the staging environment.
29. What is minimal vs. verbose monitoring?
Minimal monitoring, which is configured by default for a cloud service, uses performance counters gathered from the host operating systems for role instances (virtual machines). Verbose monitoring gathers additional metrics based on performance data within the role instances to enable a closer analysis of issues that occur during application processing.
30. What is a service definition file?
The cloud service definition file (.csdef) defines the service model, including the number of roles.
31. What is a service configuration file?
The cloud service configuration file (.cscfg) provides configuration settings for the cloud service and individual roles, including the number of role instances.
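For illustration, code running inside a role can read the values supplied by the .cscfg file at runtime through the RoleEnvironment API in Microsoft.WindowsAzure.ServiceRuntime; the setting name and role name below are hypothetical.

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

class ConfigDemo
{
    static void ShowConfiguration()
    {
        // Reads a <Setting name="StorageConnectionString" .../> entry from the .cscfg.
        string conn = RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString");

        // The instance count comes from the <Instances count="..."/> element of the .cscfg.
        int instances = RoleEnvironment.Roles["WebRole1"].Instances.Count;

        Console.WriteLine("Setting: {0}, instances of WebRole1: {1}", conn, instances);
    }
}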
32. What is a service package?
The service package (.cspkg) contains the application code and the service definition file.
33. What is a cloud service deployment?
A cloud service deployment is an instance of a cloud service deployed to the Azure staging or production environment. We can maintain deployments in both staging and production.
34. What is Azure Diagnostics?
Azure Diagnostics is the API that enables you to collect diagnostic data from applications running in Azure. Azure Diagnostics must be enabled for cloud service roles in order for verbose monitoring to be turned on.
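A hedged sketch of enabling extra performance counters with the classic Microsoft.WindowsAzure.Diagnostics API from a role's startup code follows; the counter specifier and the connection-setting name are the conventional ones, but treat them as assumptions to verify against your SDK version.

using System;
using Microsoft.WindowsAzure.Diagnostics;

class DiagnosticsSetup
{
    public static void Configure()
    {
        DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Sample CPU usage every 30 seconds and transfer the data every minute.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    }
}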
35. What are the options to manage the session state in Windows Azure?
- Windows Azure Caching (see the sketch after this list)
- SQL Azure
- Azure Table
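As an illustration of the first option: Windows Azure Caching is normally wired to ASP.NET session state through its session-state provider in web.config, and the sketch below only shows the underlying cache client API (assuming the Microsoft.ApplicationServer.Caching client assemblies); the cache key and value are hypothetical.

using Microsoft.ApplicationServer.Caching;

class CacheDemo
{
    static void Main()
    {
        // The factory reads the dataCacheClient section from the application config.
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        cache.Put("cart:42", "3 items");             // store a value
        string value = (string)cache.Get("cart:42"); // read it back
    }
}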
36. What is cspack?
It is a command-line tool that generates a service package file (.cspkg) and prepares an application for deployment, either to Windows Azure or to the compute emulator.
37. What is csrun?
It is a command-line tool that deploys a packaged application to the Windows Azure compute emulator and manages the running service.
38. What is guest OS?
It is the operating system that runs on the virtual machine that hosts an instance of a role.
42. How to programmatically scale out Azure Worker Role instances?
Using the Autoscaling Application Block (part of the Enterprise Library Integration Pack for Windows Azure).
44. What is the difference between Public Cloud and Private Cloud?
A public cloud is used as a service via the Internet by users, whereas a private cloud, as the name conveys, is deployed within certain boundaries, such as firewall settings, and is completely managed and monitored by the users working on it in an organization.
45. How to design applications to handle connection failure in Windows Azure?
The Transient Fault Handling Application Block supports various standard ways of generating the retry delay time interval, including fixed interval, incremental interval (the interval increases by a standard amount), and exponential back-off (the interval doubles with some random variation).
// The RetryPolicy below comes from the Transient Fault Handling Application Block
// ("Topaz"); the exact namespaces vary slightly between block versions.
// shardId and customerEntity (an Entity Framework ObjectContext) are assumed to be
// defined in the surrounding code.
using System;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Retry up to 5 times with an incremental interval: start at 2 seconds and add
// 2 seconds per attempt, retrying only on SQL Database transient errors.
static RetryPolicy policy =
    new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(
        5, TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(2));

policy.ExecuteAction(() =>
{
    try
    {
        string federationCmdText =
            @"USE FEDERATION Customer_Federation(ShardId = " + shardId + @")
              WITH RESET, FILTERING=ON";

        customerEntity.Connection.Open();
        customerEntity.ExecuteStoreCommand(federationCmdText);
    }
    catch (Exception)
    {
        // Clean up the connection so the next retry attempt starts from a known state.
        customerEntity.Connection.Close();
        SqlConnection.ClearAllPools();
        throw;   // rethrow so the retry policy can decide whether to retry
    }
});
46. What is Windows Azure Diagnostics?
Windows Azure Diagnostics enables us to collect diagnostic data from an application running in Windows Azure. We can use diagnostic data for debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing. http://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics/
47. What is Blob?
BLOB stands for Binary Large Object. Blob is a file of any type and size.
The Azure Blob Storage offers two types of blobs –
1. Block Blob
2. Page Blob
URL format: Blobs are addressable using the following URL format:
http://<storage-account>.blob.core.windows.net/<container>/<blob-name>
49. What is the difference between Block Blob vs Page Blob?
Block blobs are comprised of blocks, each of which is identified by a block ID.
We create or modify a block blob by uploading a set of blocks and committing them by their block IDs.
If we are uploading a block blob that is no more than 64 MB in size, we can also upload it in its entirety with a single Put Blob operation. Each block can be a maximum of 4 MB in size. The maximum size for a block blob in version 2009-09-19 is 200 GB, or up to 50,000 blocks.
Page blobs are a collection of pages. A page is a range of data that is identified by its offset from the start of the blob. To create a page blob, we initialize the page blob by calling Put Blob and specifying its maximum size.
The maximum size for a page blob is 1 TB, and a single write to a page blob may cover up to 4 MB.
Block blobs are well suited for streaming content such as video, whereas page blobs are the choice when the application must provide random read/write access.
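To make the contrast concrete, here is a short sketch (assuming the classic Microsoft.WindowsAzure.Storage client library) that uploads a block blob in one call and writes a single 512-byte page into a pre-sized page blob; the names and sizes are illustrative and the connection string targets the local storage emulator.

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobTypesDemo
{
    static void Main()
    {
        CloudBlobContainer container = CloudStorageAccount
            .Parse("UseDevelopmentStorage=true")
            .CreateCloudBlobClient()
            .GetContainerReference("demo");
        container.CreateIfNotExists();

        // Block blob: committed as blocks, or uploaded in its entirety in one operation.
        CloudBlockBlob blockBlob = container.GetBlockBlobReference("video.mp4");
        blockBlob.UploadFromByteArray(new byte[1024], 0, 1024);

        // Page blob: created with a fixed maximum size, then written in 512-byte pages.
        CloudPageBlob pageBlob = container.GetPageBlobReference("disk.vhd");
        pageBlob.Create(1024 * 1024);                       // 1 MB page blob
        byte[] page = new byte[512];
        pageBlob.WritePages(new MemoryStream(page), 0);     // random-access write at offset 0
    }
}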
50. What is the difference between Windows Azure Queues and Windows Azure Service Bus Queues?
Windows Azure supports two types of queue mechanisms: Windows Azure Queues and Service Bus Queues.
Windows Azure Queues, which are part of the Windows Azure storage infrastructure, feature a simple REST-based Get/Put/Peek interface, providing reliable, persistent messaging within and between services.
Service Bus Queues are part of a broader Windows Azure messaging infrastructure that supports queuing as well as publish/subscribe, Web service remoting, and integration patterns.
http://wcfpro.wordpress.com/2010/12/06/communication-in-windows-azure/
http://msdn.microsoft.com/en-us/library/windowsazure/hh767287.aspx
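A minimal sketch of the Get/Put interface of Windows Azure Queues via the Microsoft.WindowsAzure.Storage.Queue client library; the queue name and message text are placeholders and the connection string targets the local storage emulator.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueDemo
{
    static void Main()
    {
        CloudQueueClient client = CloudStorageAccount
            .Parse("UseDevelopmentStorage=true")
            .CreateCloudQueueClient();

        CloudQueue queue = client.GetQueueReference("orders");
        queue.CreateIfNotExists();

        queue.AddMessage(new CloudQueueMessage("process order 42"));   // Put

        CloudQueueMessage msg = queue.GetMessage();                    // Get (message becomes invisible)
        if (msg != null)
        {
            // ... process the message ...
            queue.DeleteMessage(msg);                                  // remove it once processed
        }
    }
}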
51. What is a dead letter queue?
Messages are placed on the dead-letter sub-queue by the messaging system in the following scenarios (a short sketch follows the list):
- When a message expires and dead-lettering for expired messages is set to true in a queue or subscription.
- When the max delivery count for a message is exceeded on a queue or subscription.
- When a filter evaluation exception occurs in a subscription and dead-lettering is enabled on filter evaluation exceptions.
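Here is a hedged sketch of working with a queue's dead-letter sub-queue using the Microsoft.ServiceBus.Messaging client; the connection string and queue name are placeholders.

using Microsoft.ServiceBus.Messaging;

class DeadLetterDemo
{
    static void Main()
    {
        string connectionString = "<Service Bus connection string>";   // placeholder

        // Explicitly dead-letter a message we cannot process.
        QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "orders");
        BrokeredMessage message = client.Receive();
        if (message != null)
        {
            message.DeadLetter("ProcessingError", "Payload could not be parsed");
        }

        // Read messages back from the dead-letter sub-queue of the same queue.
        string dlqPath = QueueClient.FormatDeadLetterPath("orders");
        QueueClient dlqClient = QueueClient.CreateFromConnectionString(connectionString, dlqPath);
        BrokeredMessage dead = dlqClient.Receive();
    }
}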
52. What are instance sizes of Azure?
Windows Azure will handle the load balancing for all of the instances that are created. The VM sizes are as follows:
| Compute Instance Size | CPU | Memory | Instance Storage | I/O Performance |
| --- | --- | --- | --- | --- |
| Extra Small | 1.0 GHz | 768 MB | 20 GB | Low |
| Small | 1.6 GHz | 1.75 GB | 225 GB | Moderate |
| Medium | 2 x 1.6 GHz | 3.5 GB | 490 GB | High |
| Large | 4 x 1.6 GHz | 7 GB | 1,000 GB | High |
| Extra Large | 8 x 1.6 GHz | 14 GB | 2,040 GB | High |
53. What is table storage in Windows Azure?
The Windows Azure Table storage service stores large amounts of structured data.
The service is a NoSQL datastore which accepts authenticated calls from inside and outside the Windows Azure cloud.
Windows Azure tables are ideal for storing structured, non-relational data
Table: A table is a collection of entities. Tables don't enforce a schema on entities, which means a single table can contain entities that have different sets of properties. An account can contain many tables.
Entity: An entity is a set of properties, similar to a database row. An entity can be up to 1 MB in size.
Properties: A property is a name-value pair. Each entity can include up to 252 properties to store data. Each entity also has 3 system properties that specify a partition key, a row key, and a timestamp.
Entities with the same partition key can be queried more quickly, and inserted/updated in atomic operations. An entity's row key is its unique identifier within a partition.
http://msdn.microsoft.com/en-us/library/windowsazure/hh508997.aspx
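A brief sketch of how this looks with the Microsoft.WindowsAzure.Storage.Table client library; the table name, keys, and the Email property are illustrative, and the connection string targets the local storage emulator.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class CustomerEntity : TableEntity
{
    public CustomerEntity() { }
    public CustomerEntity(string region, string customerId)
    {
        PartitionKey = region;    // entities sharing a partition key can be updated atomically
        RowKey = customerId;      // unique identifier within the partition
    }
    public string Email { get; set; }   // a user-defined name-value property
}

class TableDemo
{
    static void Main()
    {
        CloudTable table = CloudStorageAccount
            .Parse("UseDevelopmentStorage=true")
            .CreateCloudTableClient()
            .GetTableReference("customers");
        table.CreateIfNotExists();

        table.Execute(TableOperation.Insert(
            new CustomerEntity("europe", "42") { Email = "customer@example.com" }));

        TableResult result = table.Execute(
            TableOperation.Retrieve<CustomerEntity>("europe", "42"));
    }
}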
54. What is Azure Service Level Agreement (SLA)?
The Azure Compute SLA guarantees that, when we deploy two or more role instances for every role, access to our cloud service will be maintained at least 99.95 percent of the time. Also, detection and corrective action will be initiated 99.9 percent of the time when a role instance’s process is not running.
55. What are two differences between SQL Azure and Azure Tables?
SQL Azure is an RDBMS, whereas Azure Table storage is a NoSQL store.
Azure Table (in a given storage account) can store up to 100 TB of data. Size limit of a single SQL Azure DB is 150 GB (as of April 2012) - but federations can be used with SQL Azure to implement a scale-out solution.
56. What is AutoScaling?
Scaling by adding additional instances is often referred to as scaling out. Windows Azure also supports scaling up by using larger role instances instead of more role instances.
By adding and removing role instances to our Windows Azure application while it is running, we can balance the performance of the application against its running costs.
An autoscaling solution reduces the amount of manual work involved in dynamically scaling an application.
57. What is SQL Azure?
SQL Azure is a cloud-based relational database service offered by Microsoft. Conceptually, it is SQL Server in the cloud.
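As a simple illustration (not from the article), connecting to a SQL Azure database is plain ADO.NET against the *.database.windows.net server; the server, database, table, and credentials below are placeholders.

using System.Data.SqlClient;

class SqlAzureDemo
{
    static void Main()
    {
        string connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;User ID=myuser@myserver;Password=myPassword;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            int customerCount = (int)command.ExecuteScalar();
        }
    }
}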
58. What is cloud computing?
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Note: Remember Key words On-Demand, Scalable, Self-service, and Measurable. Now take the first word from each keyword which gives us OSSM which can be pronounced as Awesome. Thus remember cloud computing is Awesome.
59. How is SQL Azure different than SQL server?
SQL Azure is a cloud-based service, and so it has its own set of pros and cons when compared to SQL Server. SQL Azure service benefits include on-demand provisioning, high availability, reduced management overhead, and scalability. But SQL Azure abstracts some details from the subscriber, which can be good or bad depending on the context of the need.
60. How many replicas are maintained for each SQL Azure database?
Three replicas are maintained for each database that one provisions. One of them is the primary replica. All reads and writes happen on the primary replica, and the other replicas are kept in sync with it. If, for some reason, the primary goes down, another replica is promoted to primary. All this happens under the hood.
61. How can we migrate from SQL Server to SQL Azure?
For Data Migration, we can use BCP or SSIS. And for schema Migration, we can use Generate Script Wizard. Also, we could use a Tool called SQL Azure migration wizard available on CodePlex.
62. Which tools are available to manage SQL Azure databases and servers?
We can manage SQL Azure databases using SQL Server Management Studio 2008 R2. Also, we can manage SQL Azure databases and servers through a Silverlight app integrated into the Azure management portal.
63. Tell me something about security and SQL Azure.
SQL Azure service allows blocking a request based on its IP address through SQL Azure firewall. It uses the SQL Server Authentication mechanism to authenticate connections. Also, connections to SQL Azure are SSL-encrypted by default.
64. What is SQL Azure Firewall?
SQL Azure firewall is a security mechanism that blocks requests based on their IP addresses.
65. What is the difference between web edition and business edition?
SQL Azure Web Edition database Max Size is 5 GB whereas the business edition supports Max Size up to 50 GB. The size of a web edition database can be increased (/decreased) in the increments (/decrements) of 1 GB whereas the size of a business edition can be increased in the increments of 10 GB.
68. How do we synchronize the On-Premise SQL server with SQL Azure?
We could use a No code solution called DATA SYNC (currently in community technology preview) to synchronize on-premise SQL server with SQL Azure. We can also develop custom solutions using the SYNC framework.
69. How do we Backup SQL Azure Data?
SQL Azure keeps three replicas of a database to tackle hardware-level issues. To tackle user-level errors, we can use the COPY command that allows us to create a replica of a SQL Azure database. We can also backup SQL Azure data to local SQL server using BCP, SSIS, etc. but as of now, point in time recovery is not supported.
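For illustration, a hedged sketch of the COPY approach: the T-SQL statement CREATE DATABASE ... AS COPY OF, issued from C# against the server's master database; the server, database names, and credentials are placeholders.

using System.Data.SqlClient;

class DatabaseCopyDemo
{
    static void Main()
    {
        // Connect to the logical server's master database (placeholder credentials).
        string masterConnection =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=master;User ID=myuser@myserver;Password=myPassword;Encrypt=True;";

        using (SqlConnection connection = new SqlConnection(masterConnection))
        using (SqlCommand command = new SqlCommand(
            "CREATE DATABASE mydb_backup AS COPY OF mydb", connection))
        {
            connection.Open();
            command.ExecuteNonQuery();   // the copy continues asynchronously on the server side
        }
    }
}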
70. What is the current pricing model of SQL Azure?
Charges for SQL Azure consumption are based on 1) size and 2) data transfer.
[For contemporary pricing model, read: http://www.microsoft.com/windowsazure/pricing/]
71. What is the current limitation of the size of SQL Azure DB?
The maximum size of a SQL Azure database is 50 GB.
72. How do you handle datasets larger than 50 GB?
As of now, we have to build a custom solution at the application level that can handle the scale-out of underlying SQL Azure databases. But Microsoft has announced, SQL Azure Federations that will assist scaling out of SQL Azure databases. And scale-out means that we are splitting the data into smaller subsets spread across multiple databases.
73. What happens when the SQL Azure database reaches Max Size?
Read operations continue to work but create/insert/update operations are throttled. We can drop/delete/truncate data.
74. How many databases can we create in a single server?
150 databases (including master database) can be created in a single SQL Azure server.
75. How many servers can we create in a single subscription?
As of now, we can create six servers under a single subscription.
76. How do you improve the performance of a SQL Azure Database?
We can tune a SQL Azure database using the information available from the execution plan and statistics of a query. We could use SQL Azure’s Dynamic Management views to monitor and manage the SQL Azure database.
Also, SQL Azure performance is affected by network latency and bandwidth. Considering this, code near application topology gives the best performance.
77. What is code near application topology?
Code near application topology means that the SQL Azure database and the windows azure hosted service consuming the data are hosted in the same Azure datacenter.
[FYI: in the code far application topology, the app connects to SQL Azure from outside the Microsoft data center]
78. What were the latest updates to the SQL Azure service?
Latest SQL Azure updates include multiple servers per subscription, SQL Azure co-administrator support, creating Firewall rules for servers with IP detect.
79. When does a workload on SQL Azure get throttled?
When the database reaches its maximum size update/insert/create operations get throttled. Also, there are policies in place that do not allow a workload to exploit a shared physical server. In other words, the policies make sure that all workload gets a fair share of the shared physical server. Now, a workload can get soft throttled which means that the workload has crossed the safety threshold. A workload can also get hard throttled which means that a SQL Azure machine is out of resources and it does not accept new connections. We can know more about what happened by decoding reason codes.
80.Mention platforms which are used for large scale cloud computing?
The platforms that are used for large scale cloud computing are
- Apache Hadoop
- MapReduce
81. What are the different deployment models in cloud computing?
The different deployment models in cloud computing are
- Private Cloud
- Public Cloud
- Community Cloud
- Hybrid Cloud
82. What is the difference between cloud computing and mobile computing?
Mobile computing uses the same concept as cloud computing. Cloud computing operates on data over the internet rather than on an individual device, and provides users with data that they can retrieve on demand. In mobile computing, applications run on a remote server and give the user access to storage and management.
83. How can a user gain from utility computing?
Utility computing allows the user to pay only for what they are using. It is a plug-in managed by an organization which decides what type of services have to be deployed from the cloud.
Note: Most organizations prefer a hybrid strategy.
84. How can you secure your data for transport in the cloud?
To secure our data while transporting it from one place to another, we should make sure that the encryption key applied to the data we are sending is not leaked.
85. What are the security aspects provided with the cloud?
- Identity management: It authorizes the application services
- Access control: Permissions have to be provided to the users so that they can control the access of other users entering the cloud environment.
- Authentication and Authorization: Allows only authorized and authenticated users to access the data and applications.
86. What are the different layers used by cloud architecture?
The different layers used by cloud architecture are
- CLC or Cloud Controller
- Walrus
- Cluster Controller
- SC or Storage Controller
- NC or Node Controller
87. What is the role of a systems integrator in cloud computing?
In cloud computing, a systems integrator provides the strategy for the complicated process used to design a cloud platform. The integrator allows the creation of more accurate hybrid and private cloud networks, as integrators have all the knowledge about data center creation.
88. What does "EUCALYPTUS" stand for?
"EUCALYPTUS" stands for "Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems".
89. Explain what is the use of “EUCALYPTUS” in cloud computing?
"Eucalyptus" is an open-source software infrastructure in cloud computing, which is used to implement clusters on a cloud computing platform. It is used to build public, hybrid, and private clouds. It gives us the ability to turn our own data center into a private cloud and allows us to extend its functionality to many other organizations.
90. What is the requirement of a virtualization platform in implementing the cloud?
The requirement of a virtualization platform in implementing cloud is to
- Manage the service level policies
- Cloud Operating System
- Virtualization platforms help to keep the backend level and user level concepts different from each other
91. Before going for a cloud computing platform, what are the essential things to be taken into consideration by users?
- Compliance
- Loss of data
- Data storage
- Business continuity
- Uptime
- Data integrity in cloud computing
92. What are some open-source cloud computing platform databases?
The open-source cloud computing platform databases are
- MongoDB
- CouchDB
- LucidDB
93. What are the security laws which are implemented to secure data in the cloud?
The security laws which are implemented to secure data in the cloud are
- Processing: Control the data that is being processed correctly and completely in an application
- File: It manages and controls the data being manipulated in any of the files
- Output reconciliation: It controls the data which has to be reconciled from input to output
- Input Validation: Control the input data
- Security and Backup: It provides security and backup, and it also controls security breach logs
94. Name some large cloud providers and databases.
- Google BigTable
- Amazon SimpleDB
- Cloud-based SQL
95. What is the difference between cloud and traditional datacenters?
- The cost of the traditional data center is higher due to heating and hardware/software issues
- Cloud gets scaled when the demand increases. Majority of the expenses are spent on the maintenance of the data centers, while that is not the case with cloud computing
96. What are the different modes of software as a service (SaaS)?
- Simple multi-tenancy: In this mode, each user has independent resources that are different from those of other users; it is an efficient mode.
- Fine-grain multi-tenancy: In this type, the resources can be shared by many but the functionality remains the same.
97. What is the use of APIs in cloud services?
An API (Application Programming Interface) is very useful in cloud platforms:
- It eliminates the need to write fully-fledged programs
- It provides the instructions to make communication between one or more applications
- It allows easy creation of applications and links the cloud services with other systems
98. What are the different datacenters in cloud computing?
Cloud computing consists of different datacenters, such as:
- Containerized Datacenters
- Low-Density Datacenters
99. What are the different layers of cloud computing?
The different layers of cloud computing are:
- SaaS: Software as a Service (SaaS), provides users access directly to the cloud application without installing anything on the system.
- IaaS: Infrastructure as a service, it provides the infrastructure in terms of hardware like memory, processor speed, etc.
- PaaS: Platform as a service, it provides cloud application platform for the developers
100. What is Platform as a Service (PaaS)?
Platform as a Service, or PaaS, is an important layer in cloud computing. It provides an application platform for providers. It is responsible for providing complete virtualization of the infrastructure layer and makes it work like a single server.
101. What is a cloud service?
Cloud service is used to build cloud applications using the server in a network through the internet. It provides the facility of using the cloud application without installing it on the computer. It also reduces the maintenance and support of the application which is developed using cloud service.
102. List down the three basic clouds in cloud computing?
- Professional cloud
- Personal cloud
- Performance cloud
103. What is Infrastructure as a Service (IaaS)?
IaaS (Infrastructure as a Service) provides the virtual and physical resources that are used to build a cloud. It deals with the complexities of deploying and maintaining the services provided by this layer. Here, the infrastructure consists of the servers, storage, and other hardware systems.
104. What are the business benefits involved in cloud architecture?
The benefits involved in cloud architecture are
- Zero infrastructure investment
- Just in time infrastructure
- More efficient resource utilization
105. What are the characteristics of cloud architecture that set it apart from traditional architecture?
The characteristics that set cloud architecture apart from traditional architecture are:
- According to the demand, cloud architecture provides the hardware requirement
- Cloud architecture is capable of scaling the resource on demand
- Cloud architecture is capable of managing and handling dynamic workloads without failure
106. What is the difference between scalability and elasticity in cloud computing?
Scalability is a characteristic of cloud computing through which an increasing workload can be handled by increasing the amount of resource capacity in proportion. Elasticity is a characteristic that highlights the concept of commissioning and decommissioning a large amount of resource capacity on demand.
107. Mention the services that are provided by the Windows Azure operating system?
Windows Azure provides three core services, which are given as:
- Compute
- Storage
- Management
- Cloud Ingress
- Processor Speed
- Cloud storage services
- Cloud provided services
- Intra-cloud communications
109. What are the phases involved in cloud architecture?
- Launch Phase
- Monitor Phase
- Shutdown Phase
- Cleanup Phase
110. What are the basic characteristics of cloud computing?
- Elasticity and Scalability
- Self-service provisioning and automatic de-provisioning
- Standardized interfaces
- Billing self-service based usage model
111. What are the building blocks of cloud architecture?
- Reference architecture
- Technical architecture
- Deployment operation architecture
112. How does cloud architecture provide performance transparency and automation?
There are many tools used by cloud architecture to provide performance transparency and automation. They allow managing the cloud architecture and monitoring reports, and they also allow sharing the application using the cloud architecture. Automation is the key component of cloud architecture and helps to improve the degree of quality.
113. In cloud computing explain the role of performance cloud?
A performance cloud is useful in transferring the maximum amount of data instantly. It is used by the professionals who work on high-performance computing research.
114. Explain hybrid and community cloud?
Hybrid cloud: It consists of multiple service providers. It is a combination of public and private cloud features. It is used by the company when they require both private and public clouds.
Community Cloud: This model is quite expensive and is used when the organizations having common goals and requirements, and are ready to share the benefits of the cloud service.
115. In cloud what are the optimizing strategies?
To overcome the maintenance cost and to optimize the resources, there is a concept of three data center in the cloud which provides recovery and back-up in case of disaster or system failure and keeps all the data safe and intact.
116. What is Amazon SQS?
Amazon SQS (Simple Queue Service) messages are used to communicate between different connectors; it acts as a communicator between the various components of Amazon.
117. How is a buffer used in Amazon Web Services?
In order to make the system more efficient against bursts of traffic or load, a buffer is used. It synchronizes different components. The components always receive and process requests in an unbalanced way; the balance between the components is managed by the buffer, which makes them work at the same speed to provide faster services.
118. Mention what is Hypervisor in cloud computing and their types?
The hypervisor is a Virtual Machine Monitor which manages resources for virtual machines. There are mainly two types of hypervisors
Type 1: The guest VM runs directly on the host hardware, e.g., Xen, VMware ESXi.
Type 2: The guest VM runs on the hardware through a host OS, e.g., KVM, Oracle VirtualBox.
119. How is SQL Database performance improving with the new service tiers?
The Premium service tier supports database workloads with higher-end throughput needs. We're introducing new service tiers at lower price points (Basic & Standard), which are primarily differentiated on performance. As we move up the performance levels, the available throughput increases. This service design offers customers the opportunity to dial up the right set of resources to get the throughput their database requires.
120. What changes are being made to Premium?
Starting April 24, the Azure SQL Database Premium preview introduces a new 500 GB maximum size, another performance level (P3), new business continuity features (active geo-replication and self-service restore), and a streamlined provisioning and billing experience.
121. What new features are available in Premium?
Active Geo-Replication: Gain control over our disaster recovery process by creating up to four active, readable secondaries in any Azure region and choosing when to failover.
Self-service Restore: SQL Database Premium allows us to restore our database to any point in time within the last 35 days, in the case of a human or programmatic data deletion scenario. Replace import/export workarounds with self-service control over database restore.
Larger database size: Support for up to 500 GB databases is baked into the daily rate (no separate charge for DB size).
Additional Premium performance level: Meet high-end throughput needs with a new P3 performance level which delivers the greatest performance for our most demanding database workloads.
122. What are the performance levels?
Azure SQL Database service tiers introduce the concept of performance levels. There are six performance levels across the Basic, Standard, and Premium service tiers. The performance levels are Basic, S1, S2, P1, P2, and P3. Each performance level delivers a set of resources required to run light-weight to heavy-weight database workloads.
123. How does a customer think about the performance power available across the different performance levels?
As part of providing more predictable performance experience for customers, SQL Database is introducing the Database Throughput Unit (DTU). A DTU represents the power of the database engine as a blended measure of CPU, memory, and read and write rates. This measure helps a customer assess the relative power of the six SQL Database performance levels (Basic, S1, S2, P1, P2, and P3).
Performance levels offer the following DTUs:
- Basic: 1 DTU
- Standard: S1: 5 DTU, S2: 25 DTU
- Premium: P1: 100 DTU, P2: 200 DTU, P3: 800 DTU
124. How can a customer choose a performance level without hardware specs?
The on-premises and VM world has made machine specs the go-to option for assessing the potential power a system can provide to database workloads. However, this approach doesn't translate to the platform-as-a-service world, where abstracting the underlying hardware and OS is inherent to the value proposition and overall customer benefit.
Customers assess performance needs for building cloud-designed applications by choosing a performance level, building the app, and then testing and tuning the app, as needed. The complicated task of assessing hardware specs is more critical in the on-premises world where the ability to scale up requires more careful consideration and calculation. With database-as-a-service, picking an option, then dialing up (or down) performance power is a simple task via an API or the Azure portal.
125. How can a customer view the utilization of the resources at a performance level?
Customers can monitor the percentage of available CPU, memory, and read and write IO that is being consumed over time. Initially, customers will not see memory consumption, but this will be added to the views during the course of preview.
126. What do we mean by a transaction rate per hour, per minute, and per second?
Each of the performance levels is designed to deliver increasingly higher throughput. By summarizing the throughput of each service tier as supporting transaction rates per-hour, per-minute, and per-second, it makes it easier for customers to quickly relate the capabilities of the service tier to the requirements of an application. Basic, for example, is designed for applications that measure activity in terms of transactions per hour. An example might be a single point-of-sale (POS) terminal in a bakery shop selling hundreds of items in an hour as the required throughput. Standard is designed for applications with throughput measured in terms of tens or hundreds of transactions per minute. Premium is designed for the most intense, mission-critical throughput, where support for many hundreds of concurrent transactions per second is required.
127. What if a customer needs to understand DTU power in more precise numbers, a language they understand?
For customers looking for a familiar reference point to assist in selecting an appropriate performance level, Microsoft is publishing OLTP benchmark numbers for each of the 6 performance levels (Basic, S1, S2, P1, P2, and P3). These published transaction rates are the output of an internal Microsoft cloud benchmark which mimics the database workload of a typical OLTP cloud application. As with all benchmarks, the published transaction rates are only a guide. Real-world databases are of different sizes and complexity, encounter different mixes of workloads, and will respond in different ways. Nonetheless, the published transaction rates will help customers understand the relative throughput of each performance level. The published Microsoft benchmark transaction rates are as follows, and the methodology paper is here.
- Basic: 3,467 transactions/hour
- Standard: S1: 283 transactions/minute, S2: 1,470 transactions/minute
- Premium: P1: 98 transactions/second, P2: 192 transactions/second, P3: 730 transactions/second
The car industry provides a great analogy when thinking about DTUs. While horsepower is used to measure the power of an engine, a sports car and a truck utilize this horsepower in very different ways to achieve different results. Likewise, databases will use DTU power to achieve different results, depending on the type of workload. The Microsoft benchmark numbers are based on a single defined OLTP workload (the sports car, for example). Customers will need to assess this for their unique workload needs.
Defining database power via a published benchmark is a cloud analog of TPC-C. TPC-C is the traditional industry-standard approach for defining the highest power potential of a database workload. Customers familiar with traditional databases and database systems will immediately understand the value and caveats associated with benchmark numbers. We have found newer startups and developers to be less familiar with the benchmarking industry. Instead, this group is more motivated to just build, test, and tune.
By offering customers both the benchmark-defined transaction rates and the ability to quickly build, try, and tune (scale up or down), we believe most customer performance assessment needs will be met.
128. Are the published transaction rates a throughput guarantee?
The Microsoft benchmark and associated transaction rates do not represent a transaction guarantee to customers. Notwithstanding the differences in customer workloads, customers cannot bank transactions for large bursts or roll transactions over seconds, minutes, hours, days, etc. The best way for customers to assess their actual performance needs is to view their actual resource usage in the Azure portal. Detailed views show usage over time as a percentage of the available CPU, memory, reads, and writes within their defined performance level.
129. Why doesn’t Microsoft just publish a benchmark on TPC, the industry-standard in database benchmarking?
Currently, TPC does not permit cloud providers to publish TPC benchmarks for database workloads. There is no other cloud vendor industry standard established at this time.
130. Will Microsoft publish the benchmark code for customers to run in their own environment?
Currently, there are no plans to publish the benchmark to customers. However, Microsoft will publish the methodology details of how the defined OLTP workload was run to achieve the published benchmark numbers.