Server - Interview Questions and Answers for 'Production support' - 27 question(s) found - Order By Newest
Ans. First, we remove old backup data from the server. Then we check disk usage with the df command, and use du to identify which directories or users are consuming the most space.
Ans. In terms of compute options and configurations, Reserved Instances and On Demand Instances are the same. The only difference is that a Reserved Instance is one you reserve for a fixed duration, and in return you receive a discount on the base price of an On Demand Instance.
Ans. LDAP servers are typically used in J2EE applications to authenticate and authorise users. LDAP directories are hierarchical and optimized for read access, so they are likely to be faster than a database for read-heavy lookups.
Ans. Nagios is an open-source system, network, and infrastructure monitoring application. It alerts users when something goes wrong. Nagios is widely used as a monitoring tool for enterprise applications.
Ans. We use a cluster of web servers and application servers, with a load balancer distributing the load among them. Below that layer we have middleware servers, and then a DB server to access the database.
Ans. We use the sar command for that purpose. We also have a GUI system-monitoring tool to keep a real-time check on requests, load, and memory usage.
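As a rough illustration of what a single sar sample collects, here is a minimal standard-library Python sketch (load average, CPU count, and disk usage; the real sar reports far more counters, such as per-CPU utilisation and I/O statistics):

```python
import os
import shutil
import time

def system_snapshot():
    """One monitoring sample: load averages, CPU count, disk usage."""
    load1, load5, load15 = os.getloadavg()
    disk = shutil.disk_usage("/")
    return {
        "load_1m": load1,
        "load_5m": load5,
        "load_15m": load15,
        "cpus": os.cpu_count(),
        "disk_pct_used": 100 * disk.used / disk.total,
    }

def sample(interval=1.0, count=3):
    """Like `sar 1 3`: take `count` snapshots, `interval` seconds apart."""
    out = []
    for _ in range(count):
        out.append(system_snapshot())
        time.sleep(interval)
    return out
```

A load average consistently above the CPU count across samples is the kind of signal that would prompt deeper investigation.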
Ans. Yes, we sometimes receive issues related to outdated pages being rendered to the user. In those cases we clear the cache and then investigate the cause. Sometimes the issue is due to a comparatively high refresh interval; in those cases we reduce the cache refresh interval.
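The refresh-interval trade-off can be illustrated with a tiny TTL cache sketch in Python; the names and TTL value here are illustrative only:

```python
import time

class TTLCache:
    """Minimal page cache with a refresh interval (TTL). Lowering `ttl`
    makes stale pages expire sooner, at the cost of more backend fetches."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, fetch):
        """Return the cached value if still fresh, else call fetch() and cache it."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]
        value = fetch()
        self._store[key] = (value, now)
        return value

    def clear(self):
        """Manual purge, as in the 'clear the cache' step above."""
        self._store.clear()
```

Clearing the cache forces the next request to fetch a fresh page immediately, while reducing the TTL bounds how long a stale page can survive.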
Ans. It is the ability to deploy changes on the fly, without needing to first build, deploy, and then restart. All of these steps happen automatically as soon as changes are made to the code.
Ans. Cold Deployment is the conventional deployment mechanism that follows a multi-step process to deliver code changes to a running app, i.e. Build -> Deploy -> Restart, whereas Hot Deployment applies changes on the fly, without needing to first build, deploy, and then restart; all of these steps happen automatically as soon as changes are made to the code.
Ans.
a. Inform the stakeholders that the issue is being worked on.
b. Log in to the server to see if it is responding.
c. Check the application and web server logs to see if the application is receiving requests.
d. If not, involve the appropriate network team.
e. Keep the stakeholders informed of the progress.
f. Bounce the web / application server instance, if required.
g. Close the ticket with the steps taken to resolve the problem.
h. Complete the RCA (Root Cause Analysis) and submit the report to the stakeholders.
Ans. We look for errors in the logs from the last n minutes around when the issue occurred. If the issue is still occurring intermittently, we tail the logs of the different application server instances to watch for error snippets in the live output.
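The "errors in the last n minutes" filter can be sketched in Python; the log format assumed here is a hypothetical timestamped layout, so the parsing would need adjusting to your actual logs:

```python
from datetime import datetime, timedelta

def recent_errors(lines, now, minutes=5):
    """Return ERROR-level log lines from the last `minutes` minutes.
    Assumes a (hypothetical) format: 'YYYY-mm-dd HH:MM:SS LEVEL message'."""
    cutoff = now - timedelta(minutes=minutes)
    hits = []
    for line in lines:
        try:
            stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        except ValueError:
            continue  # skip continuation lines / stack trace frames
        level = line[20:].split(" ", 1)[0]
        if level == "ERROR" and stamp >= cutoff:
            hits.append(line)
    return hits
```

The same idea is often done directly in the shell with grep over a tailed log; the point is to narrow the window to the minutes around the incident.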
Ans. Yes, we have created both system and log monitoring scripts to keep track of exceptions. We also use a tool that informs the stakeholders if an exceptional event occurs in the system.
Ans. We are using Akamai as a web server cache.
Ans. We involve the DBAs and work with them to solve it. While they are working on it, we keep the stakeholders informed of the progress.
Ans. We inform the stakeholders of the resolution and the steps taken. We update the ticket notes and link the ticket to the master / related tickets. An RCA is done for high-priority and critical issues, and a report is submitted.
Ans. Load Balancing is a provision to improve performance by reducing the load on any single machine / server: traffic is distributed among different machines, resulting in better performance / response time by leveraging more resources. Failover is a provision to achieve better availability by switching to a backup server in case of failure. Load Balancing aims at improving performance, whereas failover aims at improving availability. We can achieve both together by using a load balancing system that isolates an instance in case of its failure.
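The combination of load balancing and failover can be illustrated with a round-robin balancer sketch in Python that skips unhealthy backends; the backend names are placeholders:

```python
import itertools

class RoundRobinBalancer:
    """Round-robin load balancing with failover: unhealthy backends
    are skipped, so a single failure does not take the service down."""
    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        """Isolate a failed instance (what a health check would trigger)."""
        self.healthy.discard(backend)

    def mark_up(self, backend):
        """Return a recovered instance to the rotation."""
        if backend in self.backends:
            self.healthy.add(backend)

    def next_backend(self):
        """Pick the next healthy backend; raise only if all are down."""
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("all backends down")
```

Distributing requests across instances gives the performance benefit; skipping instances marked down gives the availability benefit.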
Ans. It's a configuration for an auto-scaling environment that specifies the attributes / conditions for scaling instances up and down. For example, we may want the environment to scale up when the number of requests / sec exceeds a particular threshold, and scale down when it drops below another threshold.
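A scaling trigger of that kind can be sketched as a simple decision function in Python; all thresholds and bounds here are hypothetical:

```python
def scaling_decision(requests_per_sec, current_instances,
                     scale_up_at=1000, scale_down_at=200,
                     min_instances=2, max_instances=10):
    """Evaluate a (hypothetical) scaling trigger: return the desired
    instance count given the current aggregate request rate."""
    per_instance = requests_per_sec / current_instances
    if per_instance > scale_up_at and current_instances < max_instances:
        return current_instances + 1
    if per_instance < scale_down_at and current_instances > min_instances:
        return current_instances - 1
    return current_instances
```

Note the gap between the up and down thresholds: without it, the environment would oscillate (scale up, immediately scale back down) around a single threshold.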
Ans. Manual scaling is the process of scaling instances up and down in an environment manually, by observing the scaling trigger, whereas auto scaling is the process wherein a condition specified as a scaling trigger causes the system to scale instances up and down automatically. For example, if an organization has a policy of scaling up when requests / sec / instance exceeds 10000, the operations team has to monitor the metric manually and request scaling up or down as the need arises. Manual scaling offers flexibility, as the team can scale on the basis of a combination of factors and take decisions on the fly, but it requires continuous monitoring. Alternatively, the same trigger can be fed to the environment as a triggering policy and taken care of by the system. This requires such a triggering policy to be available within the environment configuration, but as it is handled automatically, it requires no supervision.
Ans. Primary benefits of cloud computing include:
- Data backup and storage
- Reduced costs of managing and maintaining IT systems
- Powerful server capability and scalability
- Better productivity and collaboration efficiency
- Access to automatic updates
Ans. Virtualization is software that partitions your hardware into multiple virtual machines, while cloud computing is the combination of multiple hardware devices. In virtualization, a user gets dedicated (virtual) hardware, while in cloud computing multiple hardware devices together provide one login environment for the user.
Ans. A web server is a program that uses HTTP to serve the files that make up web pages, in response to requests sent by the HTTP clients on users' computers. Web server types:
- Apache HTTP Server, by the Apache Software Foundation
- Internet Information Services (IIS), from Microsoft
- Sun Java System Web Server
- Jigsaw Server
Ans. We can provision more instances for that particular time slot. | ||||
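A time-slot based provisioning rule can be sketched in Python; the peak window and instance counts are hypothetical (cloud auto-scaling services support this natively via scheduled scaling actions):

```python
def desired_capacity(hour, base=2, peak=8, peak_hours=range(9, 18)):
    """Time-slot provisioning: run `peak` instances during the
    (hypothetical) busy window, `base` instances otherwise."""
    return peak if hour in peak_hours else base
```

A scheduler evaluating this rule each hour would add instances just before the known busy slot and release them afterwards, instead of paying for peak capacity all day.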
Ans. Following are the key differences between containers and serverless:
- Supported host environments: Containers can run on any modern Linux server, as well as certain versions of Windows. In contrast, serverless runs on specific host platforms, most of which are based in the public cloud (like AWS Lambda or Azure Functions).
- Self-servicing ability: For the reasons just noted, in most cases serverless functions require you to use a public cloud. (There are on-premises serverless frameworks, like Fn, but these are not yet widely used.) With containers, you can set up your own on-premises host environment, or use a public cloud service like ECS.
- Cost: Because serverless environments are hosted in the cloud, you have to pay to use them. In contrast, you can set up a container environment yourself using open source code for free (although you'll still have management costs, of course).
- Supported languages: You can containerize an application written in any language as long as the host server supports it. Serverless frameworks are different; they support a limited number of languages, and the details vary from platform to platform.
- Statefulness: Most serverless platforms are designed to support stateless workloads. (Some serverless providers offer limited support for stateful services; cf. Durable Functions on Azure.) Although you can connect to external storage services (like S3 on AWS) from within serverless platforms, the functions themselves cannot be stateful. Containers present their own persistent storage challenges, but creating stateful containerized apps is certainly possible.
- Availability: Serverless functions are designed to run for a short period, usually a few hundred seconds, before they shut down. Containers can run for as long as you need.