AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49. Which CI/CD tool is widely integrated with MLOps pipelines?
- Photoshop
- Jenkins
- Final Cut Pro
- MS Word

2 / 49. What is a key advantage of using AIOps in incident management?
- Replacing monitoring tools entirely
- Proactive anomaly detection and root cause analysis
- Increased number of false alerts
- Manual intervention for faster resolutions

3 / 49. What does a feature store provide in MLOps?
- A CI/CD orchestrator
- A centralized repository for storing and sharing ML features
- A code versioning platform
- A monitoring dashboard

4 / 49. What is the role of Kubernetes in MLOps pipelines?
- Hyperparameter tuning only
- Model evaluation
- Scaling and orchestrating ML workloads in production
- Data preprocessing

5 / 49. What is the purpose of MLflow in MLOps?
- Log analysis
- Container orchestration
- Database sharding
- Experiment tracking, model registry, and deployment

6 / 49. Which is a key output of anomaly detection in AIOps?
- Optimized hyperparameters
- Identified unusual events that may indicate system issues
- Application code coverage
- CI/CD deployment reports

7 / 49. Which of the following is a common model deployment pattern?
- Blue-Green Deployment
- Static Scaling
- Round-Robin Compilation
- Git Rebase Deployment

8 / 49. Which of the following ensures reproducibility in ML experiments?
- Versioning code, data, and models
- Avoiding CI/CD
- Skipping documentation
- Manual hyperparameter tuning only

9 / 49. Which monitoring metric is MOST relevant in MLOps?
- Number of Git commits
- Model accuracy and drift detection
- CPU utilization only
- Website traffic

10 / 49. Why is monitoring critical after model deployment?
- To detect performance degradation and drift
- To speed up CI builds only
- To reduce hardware costs
- To reduce developer workload

11 / 49. Which of the following tools is commonly associated with AIOps?
- Moogsoft
- Terraform
- Kubernetes
- Apache Spark

12 / 49. What is the main role of Docker in MLOps pipelines?
- To containerize ML models for consistent deployment
- To analyze log anomalies
- To perform hyperparameter tuning
- To act as a monitoring dashboard

13 / 49. Why is explainability important in production ML models?
- To reduce deployment frequency
- To reduce CI/CD runtime
- To increase data size
- To understand model decisions and build trust with stakeholders

14 / 49. What is the role of continuous validation in MLOps?
- Ensures deployed models remain accurate and reliable with new data
- Improves GPU performance
- Reduces network traffic
- Tracks Git commits

15 / 49. What is a common challenge in automating ML pipelines?
- Automating UI testing
- Data versioning and reproducibility
- Cloud billing alerts
- Writing HTML code

16 / 49. Which of the following ensures fairness and bias detection in ML models?
- Using random data
- Responsible AI practices and monitoring
- Skipping validation
- Relying on accuracy only

17 / 49. What does CI/CD integration with a model registry achieve?
- Automates promotion of validated models to production
- Simplifies HTML rendering
- Improves IDE performance
- Tracks GitHub issues only

18 / 49. Which stage in MLOps involves hyperparameter tuning?
- Model training & optimization
- Monitoring
- Incident management
- Deployment

19 / 49. Which tool is commonly used for workflow orchestration in ML pipelines?
- Excel
- Jenkins only
- Nagios
- Apache Airflow
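To make the experiment-tracking idea from question 5 concrete, here is a minimal sketch using MLflow's Python API with scikit-learn. The experiment name, hyperparameter values, and dataset are illustrative assumptions, not part of the test itself.

```python
# Minimal sketch of experiment tracking with MLflow: log the parameters,
# metric, and model artifact of a single training run so it can be
# reproduced and compared against later runs.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("screening-demo")  # hypothetical experiment name

with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 200}  # assumed hyperparameters
    model = LogisticRegression(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # hyperparameters of this run
    mlflow.log_metric("accuracy", accuracy)   # evaluation metric
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```

Runs logged this way show up in the MLflow tracking UI, where they can be compared and promoted to the model registry, which is what the CI/CD-plus-registry questions above allude to.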
20 / 49. Which of the following is an example of CI/CD for ML models?
- Automating retraining, testing, and deployment of models
- Running experiments locally only
- Skipping version control
- Manual model validation

21 / 49. Which of the following best describes the goal of AIOps?
- Automating CI/CD pipelines without monitoring
- Applying AI/ML techniques to IT operations for proactive issue detection
- Automating infrastructure scaling only
- Replacing DevOps entirely

22 / 49. Which cloud service provides a fully managed ML pipeline solution?
- Kubernetes without ML
- AWS SageMaker Pipelines
- Photoshop Cloud
- VMware vSphere

23 / 49. How does AIOps reduce 'alert fatigue'?
- By disabling monitoring tools
- By correlating events and suppressing noise
- By automating deployments only
- By generating more alerts

24 / 49. What is shadow deployment in MLOps?
- Deploying on shadow servers only
- Deploying only half the model
- Running a new model in parallel with the current one without serving predictions to users
- Deploying without monitoring

25 / 49. Which of the following is an example of predictive analytics in AIOps?
- Real-time log streaming
- Forecasting disk failures before they occur
- Manual root cause analysis
- Static capacity planning

26 / 49. What is online learning in ML deployment?
- Offline retraining every month
- Updating the model incrementally with streaming data
- Deploying only during office hours
- Batch scoring only

27 / 49. In a CI/CD pipeline, unit tests for ML models typically validate:
- User interface design
- Network bandwidth
- Operating system drivers
- Data preprocessing and feature transformations

28 / 49. In MLOps, what is 'model drift'?
- When models crash during deployment
- When hyperparameters remain constant
- When the model is moved between servers
- When model performance degrades due to changes in data patterns

29 / 49. Which of the following tools integrates monitoring into MLOps pipelines?
- Tableau only
- PowerPoint
- Slack
- Prometheus & Grafana

30 / 49. Which of the following best describes model governance?
- Processes ensuring compliance, auditability, and security in ML models
- Hyperparameter optimization
- Visualization dashboards
- Anomaly detection only

31 / 49. What is the purpose of a model registry in MLOps?
- To store cloud infrastructure templates
- To manage Kubernetes clusters
- To store, version, and manage trained ML models
- To track CI/CD pipeline executions

32 / 49. In MLOps, what is 'model lineage'?
- Monitoring server uptime
- Versioning HTML files
- Measuring network latency
- Tracking datasets, code, and parameters that produced a model

33 / 49. Which of the following is NOT a stage in the MLOps lifecycle?
- Model destruction
- Model monitoring
- Model deployment
- Model training

34 / 49. Which algorithm is often used in AIOps for log anomaly detection?
- LSTM (Long Short-Term Memory) networks
- Static Regex Matching
- Naive Bayes only
- Decision Trees for UI

35 / 49. What role does Natural Language Processing (NLP) play in AIOps?
- Creating CI/CD pipelines
- Training computer vision models
- Provisioning infrastructure
- Parsing log files and correlating incidents

36 / 49. Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Splunk
- Kubeflow Pipelines
- Airflow only
- Nagios

37 / 49. What is the role of GitOps in MLOps?
- Training ML models
- Managing ML infrastructure and deployments declaratively through Git
- Visualizing anomalies
- Running hyperparameter optimization

38 / 49. Which AI technique is commonly used in AIOps for anomaly detection?
- Linear regression only
- Rule-based filtering
- Manual log parsing
- Clustering algorithms
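The shadow deployment pattern from question 24 above can be illustrated with a short sketch: the candidate "shadow" model scores every request alongside the live model, but only the live model's prediction is returned to the caller, and the shadow output is merely logged for offline comparison. The model choices, data, and logger name here are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of shadow deployment: score with both models, serve only
# the live model's prediction, and log the comparison for later analysis.
import logging

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")


def serve(features, live_model, shadow_model):
    """Return the live prediction; record the shadow prediction silently."""
    live_pred = live_model.predict([features])[0]
    try:
        shadow_pred = shadow_model.predict([features])[0]
        log.info("live=%s shadow=%s agree=%s",
                 live_pred, shadow_pred, live_pred == shadow_pred)
    except Exception:  # a shadow failure must never affect the caller
        log.exception("shadow model failed")
    return live_pred  # callers only ever see the live model's output


if __name__ == "__main__":
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    live = LogisticRegression(max_iter=500).fit(X, y)          # current model
    shadow = DecisionTreeClassifier(random_state=0).fit(X, y)  # candidate
    for row in X[:5]:
        serve(row, live, shadow)
```

Because the shadow model never serves traffic, disagreements or errors carry no user-facing risk, which is exactly what distinguishes this pattern from a canary rollout.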
39 / 49. What is Canary Deployment in MLOps?
- Deploying models only in staging
- Gradually rolling out a model to a subset of users before full release
- Deploying models without validation
- Deploying multiple models in parallel permanently

40 / 49. Which of the following describes Continuous Training (CT) in MLOps?
- Re-training models regularly with new data
- Scaling infrastructure on demand
- Running unit tests for ML code
- Deploying models continuously without validation

41 / 49. What is the main purpose of MLOps?
- To replace software engineering practices
- To build web applications
- To automate cloud billing processes
- To integrate ML models into production through CI/CD pipelines

42 / 49. Which tool is widely used for managing ML pipelines?
- Terraform
- Kubeflow
- Nagios
- Jenkins

43 / 49. Which type of data is MOST commonly analyzed by AIOps platforms?
- Unstructured IT operations data like logs, metrics, and traces
- Customer satisfaction surveys
- Structured business data
- Video and image datasets

44 / 49. Which metric is best for evaluating classification models on an imbalanced dataset?
- CPU usage
- Mean Squared Error
- Precision-Recall AUC
- Accuracy only

45 / 49. What is blue-green deployment in ML pipelines?
- Using two ML algorithms simultaneously
- Maintaining two identical environments (blue and green) to switch traffic safely during updates
- Running models on GPUs only
- Splitting training datasets randomly

46 / 49. Which challenge does AIOps primarily address?
- Manual analysis of large-scale operational data
- Limited access to GitHub repositories
- Lack of cloud cost optimization
- Inability to run unit tests

47 / 49. What is the difference between DevOps and MLOps?
- DevOps is only for cloud computing
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
- MLOps replaces DevOps entirely
- MLOps is only about data collection

48 / 49. What is 'model rollback' in CI/CD pipelines?
- Resetting hyperparameters
- Restarting the server
- Re-training from scratch
- Reverting to a previous stable model when the new one fails

49 / 49. What is the purpose of data drift detection?
- To identify changes in input data distribution affecting model performance
- To version-control datasets
- To detect server failures
- To optimize CI/CD runtime
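As a worked illustration of the data drift detection asked about in question 49, the sketch below compares a live input sample against a training-time snapshot feature by feature using a two-sample Kolmogorov-Smirnov test from SciPy. The synthetic data and the p-value threshold are assumptions for demonstration, not a production alerting policy.

```python
# Minimal sketch of data drift detection: flag features whose live
# distribution differs significantly from the training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_data = rng.normal(loc=0.0, scale=1.0, size=(5_000, 3))  # training snapshot
live_data = rng.normal(loc=0.4, scale=1.0, size=(1_000, 3))   # shifted live traffic

P_VALUE_THRESHOLD = 0.01  # assumed alerting threshold

for feature_idx in range(train_data.shape[1]):
    result = ks_2samp(train_data[:, feature_idx], live_data[:, feature_idx])
    drifted = result.pvalue < P_VALUE_THRESHOLD
    print(f"feature {feature_idx}: KS={result.statistic:.3f} "
          f"p={result.pvalue:.4f} drift={'YES' if drifted else 'no'}")
```

In an MLOps pipeline such a check would typically run on a schedule over recent inference inputs, with drift alerts feeding the monitoring stack or triggering continuous training.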