IT courses
Värmlands län
More than 100 results (in Karlstad) for IT courses
 

Online course 90 minutes 6 000 kr
This module is the link between the practical (Managing Professional) and the strategic (Strategic Leader) certification streams, and is part of both.
You will be sent a «Core guidance» book and a certification voucher so that you can take the certification test at home or at work, for example. The voucher is valid for one year. The time for the certification test is arranged as described in the e-mail that accompanies the voucher. The exam is supervised by a web-based proctor. The exam is in English. The exam format is multiple choice: 40 questions are to be answered, and you pass with 70% correct answers (i.e. 28 of 40). Participants have 1 hour and 30 minutes for the exam. No aids are permitted. Prerequisites: passed ITIL Foundation certification and a completed approved course/e-learning.
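For reference, the pass-mark arithmetic described above works out as follows (a minimal sketch, not part of the course material):

```python
# Minimal sketch of the pass mark described above: 70% of 40 questions.
# Integer arithmetic avoids floating-point rounding surprises.
QUESTIONS = 40
PASS_PERCENT = 70

required_correct = (QUESTIONS * PASS_PERCENT + 99) // 100  # ceiling of 70% of 40
print(required_correct)  # 28, i.e. "28 of 40"
```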
Oslo Trondheim 5 days 34 000 kr
29 Apr
20 May
17 Jun
https://www.glasspaper.no/kurs/ccna-implementing-and-administering-cisco-solutions/
CCNA: Implementing and Administering Cisco Solutions
2 days 16 900 kr
Elasticsearch
Oslo 1 day 9 900 kr
07 Jun
09 Sep
ITIL® 4 Practitioner: Incident Management
In-house course 3 days 27 000 kr
In this course, application developers learn how to design, develop, and deploy applications that seamlessly integrate components from the Google Cloud ecosystem.
Through a combination of presentations, demos, and hands-on labs, participants learn how to use GCP services and pre-trained machine learning APIs to build secure, scalable, and intelligent cloud-native applications.

Objectives
This course teaches participants the following skills: use best practices for application development; choose the appropriate data storage option for application data; implement federated identity management; develop loosely coupled application components or microservices; integrate application components and data sources; debug, trace, and monitor applications; perform repeatable deployments with containers and deployment services; choose the appropriate application runtime environment, using Google Container Engine as a runtime environment and later switching to a no-ops solution with Google App Engine Flex.
All courses will be delivered in partnership with ROI Training, Google Cloud Premier Partner, using a Google Authorized Trainer.

Course Outline
Module 1: Best Practices for Application Development
Code and environment management; design and development of secure, scalable, reliable, loosely coupled application components and microservices; continuous integration and delivery; re-architecting applications for the cloud.
Module 2: Google Cloud Client Libraries, Google Cloud SDK, and Google Firebase SDK
How to set up and use Google Cloud Client Libraries, Google Cloud SDK, and Google Firebase SDK. Lab: Set up Google Client Libraries, Google Cloud SDK, and Firebase SDK on a Linux instance and set up application credentials.
Module 3: Overview of Data Storage Options
Overview of options to store application data; use cases for Google Cloud Storage, Google Cloud Datastore, Cloud Bigtable, Google Cloud SQL, and Cloud Spanner.
Module 4: Best Practices for Using Cloud Datastore
Best practices related to queries, built-in and composite indexes, inserting and deleting data (batch operations), transactions, and error handling; bulk-loading data into Cloud Datastore by using Google Cloud Dataflow. Lab: Store application data in Cloud Datastore.
Module 5: Performing Operations on Buckets and Objects
Operations that can be performed on buckets and objects; consistency model; error handling.
Module 6: Best Practices for Using Cloud Storage
Naming buckets for static websites and other uses; naming objects (from an access distribution perspective); performance considerations; setting up and debugging a CORS configuration on a bucket. Lab: Store files in Cloud Storage.
Module 7: Handling Authentication and Authorization
Cloud Identity and Access Management (IAM) roles and service accounts; user authentication by using Firebase Authentication; user authentication and authorization by using Cloud Identity-Aware Proxy. Lab: Authenticate users by using Firebase Authentication.
Module 8: Using Google Cloud Pub/Sub to Integrate Components of Your Application
Topics, publishers, and subscribers; pull and push subscriptions; use cases for Cloud Pub/Sub. Lab: Develop a backend service to process messages in a message queue.
Module 9: Adding Intelligence to Your Application
Overview of pre-trained machine learning APIs such as Cloud Vision API and Cloud Natural Language Processing API.
Module 10: Using Cloud Functions for Event-Driven Processing
Key concepts such as triggers, background functions, and HTTP functions; use cases; developing and deploying functions; logging, error reporting, and monitoring.
Module 11: Managing APIs with Google Cloud Endpoints
Open API deployment configuration. Lab: Deploy an API for your application.
Module 12: Deploying an Application by Using Google Cloud Build, Google Cloud Container Registry, and Google Cloud Deployment Manager
Creating and storing container images; repeatable deployments with deployment configuration and templates. Lab: Use Deployment Manager to deploy a web application into Google App Engine flexible environment test and production environments.
Module 13: Execution Environments for Your Application
Considerations for choosing an execution environment for your application or service: Google Compute Engine, Kubernetes Engine, App Engine flexible environment, Cloud Functions, Cloud Dataflow. Lab: Deploying your application on App Engine flexible environment.
Module 14: Debugging, Monitoring, and Tuning Performance by Using Google Stackdriver
Stackdriver Debugger; Stackdriver Error Reporting. Lab: Debugging an application error by using Stackdriver Debugger and Error Reporting. Stackdriver Logging; key concepts related to Stackdriver Trace and Stackdriver Monitoring. Lab: Use Stackdriver Monitoring and Stackdriver Trace to trace a request across services, observe, and optimize performance.
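As a flavour of the Pub/Sub integration covered in Module 8, a minimal sketch using the google-cloud-pubsub client library might look like this; the project, topic, and subscription names are hypothetical, and the snippet is an illustration rather than course material:

```python
# Minimal Cloud Pub/Sub publish/subscribe sketch (cf. Module 8).
# Assumes the google-cloud-pubsub package and application credentials are configured.
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

PROJECT = "my-project"          # hypothetical project ID
TOPIC = "orders"                # hypothetical topic
SUBSCRIPTION = "orders-worker"  # hypothetical subscription

# Publish a message with an attribute.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, TOPIC)
future = publisher.publish(topic_path, b"order received", order_id="1234")
print("published message", future.result())

# Pull messages from the corresponding subscription.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    print("received", message.data, dict(message.attributes))
    message.ack()  # acknowledge so the message is not redelivered

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=30)  # listen for up to 30 seconds
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()  # block until shutdown is complete
```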
Online course 2 hours 3 120 kr
Bluebeam Revu is a complete PDF solution that lets you create and edit PDF documents and drawings. You can also mark up and do quantity take-offs from the drawings, and ...
On this online course you will learn:
Publishing, editing, commenting and markup
Security, digital stamps and digital signatures
Creating and saving symbols and custom markup tools in the Tool Chest
Cloud-based collaboration and document sharing in Bluebeam Studio
eXtreme features (OCR, text removal, form creation, Batch Link)
Some eXtreme features are shown/mentioned in the course.
Virtual classroom 4 days 25 000 kr
In this course, the student will learn about the data engineering patterns and practices as they pertain to working with batch and real-time analytical solutions using Azu...
COURSE OVERVIEW
Students will begin by understanding the core compute and storage technologies that are used to build an analytical solution. They will then explore how to design an analytical serving layer and focus on data engineering considerations for working with source files. The students will learn how to interactively explore data stored in files in a data lake. They will learn the various ingestion techniques that can be used to load data using the Apache Spark capability found in Azure Synapse Analytics or Azure Databricks, or how to ingest using Azure Data Factory or Azure Synapse pipelines. The students will also learn the various ways they can transform the data using the same technologies that are used to ingest data. The student will spend time on the course learning how to monitor and analyze the performance of the analytical system so that they can optimize the performance of data loads, or queries that are issued against the systems. They will understand the importance of implementing security to ensure that the data is protected at rest or in transit. The student will then be shown how the data in an analytical system can be used to create dashboards, or build predictive models in Azure Synapse Analytics.

TARGET AUDIENCE
The primary audience for this course is data professionals, data architects, and business intelligence professionals who want to learn about data engineering and building analytical solutions using data platform technologies that exist on Microsoft Azure. The secondary audience for this course is data analysts and data scientists who work with analytical solutions built on Microsoft Azure.

COURSE OBJECTIVES
Explore compute and storage options for data engineering workloads in Azure
Design and implement the serving layer
Understand data engineering considerations
Run interactive queries using serverless SQL pools
Explore, transform, and load data into the Data Warehouse using Apache Spark
Perform data exploration and transformation in Azure Databricks
Ingest and load data into the Data Warehouse
Transform data with Azure Data Factory or Azure Synapse Pipelines
Integrate data from notebooks with Azure Data Factory or Azure Synapse Pipelines
Optimize query performance with dedicated SQL pools in Azure Synapse
Analyze and optimize Data Warehouse storage
Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link
Perform end-to-end security with Azure Synapse Analytics
Perform real-time stream processing with Stream Analytics
Create a stream processing solution with Event Hubs and Azure Databricks
Build reports using Power BI integration with Azure Synapse Analytics
Perform integrated Machine Learning processes in Azure Synapse Analytics

COURSE CONTENT
Module 1: Explore compute and storage options for data engineering workloads
This module provides an overview of the Azure compute and storage technology options that are available to data engineers building analytical workloads. This module teaches ways to structure the data lake, and to optimize the files for exploration, streaming, and batch workloads. The student will learn how to organize the data lake into levels of data refinement as they transform files through batch and stream processing. Then they will learn how to create indexes on their datasets, such as CSV, JSON, and Parquet files, and use them for potential query and workload acceleration.
Introduction to Azure Synapse Analytics; Describe Azure Databricks; Introduction to Azure Data Lake storage; Describe Delta Lake architecture; Work with data streams by using Azure Stream Analytics.
Lab 1: Explore compute and storage options for data engineering workloads. Combine streaming and batch processing with a single pipeline; organize the data lake into levels of file transformation; index data lake storage for query and workload acceleration.
After completing module 1, students will be able to: describe Azure Synapse Analytics; describe Azure Databricks; describe Azure Data Lake storage; describe Delta Lake architecture; describe Azure Stream Analytics.

Module 2: Design and implement the serving layer
This module teaches how to design and implement data stores in a modern data warehouse to optimize analytical workloads. The student will learn how to design a multidimensional schema to store fact and dimension data. Then the student will learn how to populate slowly changing dimensions through incremental data loading from Azure Data Factory.
Design a multidimensional schema to optimize analytical workloads; code-free transformation at scale with Azure Data Factory; populate slowly changing dimensions in Azure Synapse Analytics pipelines.
Lab 2: Designing and implementing the serving layer. Design a star schema for analytical workloads; populate slowly changing dimensions with Azure Data Factory and mapping data flows.
After completing module 2, students will be able to: design a star schema for analytical workloads; populate slowly changing dimensions with Azure Data Factory and mapping data flows.

Module 3: Data engineering considerations for source files
This module explores data engineering considerations that are common when loading data into a modern data warehouse from files stored in an Azure Data Lake, and the security considerations associated with storing files in the data lake.
Design a modern data warehouse using Azure Synapse Analytics; secure a data warehouse in Azure Synapse Analytics.
Lab 3: Data engineering considerations. Managing files in an Azure data lake; securing files stored in an Azure data lake.
After completing module 3, students will be able to: design a modern data warehouse using Azure Synapse Analytics; secure a data warehouse in Azure Synapse Analytics.

Module 4: Run interactive queries using Azure Synapse Analytics serverless SQL pools
In this module, students will learn how to work with files stored in the data lake and external file sources, through T-SQL statements executed by a serverless SQL pool in Azure Synapse Analytics. Students will query Parquet files stored in a data lake, as well as CSV files stored in an external data store. Next, they will create Azure Active Directory security groups and enforce access to files in the data lake through Role-Based Access Control (RBAC) and Access Control Lists (ACLs).
Explore Azure Synapse serverless SQL pools capabilities; query data in the lake using Azure Synapse serverless SQL pools; create metadata objects in Azure Synapse serverless SQL pools; secure data and manage users in Azure Synapse serverless SQL pools.
Lab 4: Run interactive queries using serverless SQL pools. Query Parquet data with serverless SQL pools; create external tables for Parquet and CSV files; create views with serverless SQL pools; secure access to data in a data lake when using serverless SQL pools; configure data lake security using Role-Based Access Control (RBAC) and Access Control Lists.
After completing module 4, students will be able to: understand Azure Synapse serverless SQL pools capabilities; query data in the lake using Azure Synapse serverless SQL pools; create metadata objects in Azure Synapse serverless SQL pools; secure data and manage users in Azure Synapse serverless SQL pools.

Module 5: Explore, transform, and load data into the Data Warehouse using Apache Spark
This module teaches how to explore data stored in a data lake, transform the data, and load data into a relational data store. The student will explore Parquet and JSON files and use techniques to query and transform JSON files with hierarchical structures. Then the student will use Apache Spark to load data into the data warehouse and join Parquet data in the data lake with data in the dedicated SQL pool.
Understand big data engineering with Apache Spark in Azure Synapse Analytics; ingest data with Apache Spark notebooks in Azure Synapse Analytics; transform data with DataFrames in Apache Spark pools in Azure Synapse Analytics; integrate SQL and Apache Spark pools in Azure Synapse Analytics.
Lab 5: Explore, transform, and load data into the Data Warehouse using Apache Spark. Perform data exploration in Synapse Studio; ingest data with Spark notebooks in Azure Synapse Analytics; transform data with DataFrames in Spark pools in Azure Synapse Analytics; integrate SQL and Spark pools in Azure Synapse Analytics.
After completing module 5, students will be able to: describe big data engineering with Apache Spark in Azure Synapse Analytics; ingest data with Apache Spark notebooks in Azure Synapse Analytics; transform data with DataFrames in Apache Spark pools in Azure Synapse Analytics; integrate SQL and Apache Spark pools in Azure Synapse Analytics.

Module 6: Data exploration and transformation in Azure Databricks
This module teaches how to use various Apache Spark DataFrame methods to explore and transform data in Azure Databricks. The student will learn how to perform standard DataFrame methods to explore and transform data. They will also learn how to perform more advanced tasks, such as removing duplicate data, manipulating date/time values, renaming columns, and aggregating data.
Describe Azure Databricks; read and write data in Azure Databricks; work with DataFrames in Azure Databricks; work with advanced DataFrame methods in Azure Databricks.
Lab 6: Data exploration and transformation in Azure Databricks. Use DataFrames in Azure Databricks to explore and filter data; cache a DataFrame for faster subsequent queries; remove duplicate data; manipulate date/time values; remove and rename DataFrame columns; aggregate data stored in a DataFrame.
After completing module 6, students will be able to: describe Azure Databricks; read and write data in Azure Databricks; work with DataFrames in Azure Databricks; work with advanced DataFrame methods in Azure Databricks.

Module 7: Ingest and load data into the data warehouse
This module teaches students how to ingest data into the data warehouse through T-SQL scripts and Synapse Analytics integration pipelines. The student will learn how to load data into Synapse dedicated SQL pools with PolyBase and COPY using T-SQL. The student will also learn how to use workload management along with a Copy activity in an Azure Synapse pipeline for petabyte-scale data ingestion.
Use data loading best practices in Azure Synapse Analytics; petabyte-scale ingestion with Azure Data Factory.
Lab 7: Ingest and load data into the Data Warehouse. Perform petabyte-scale ingestion with Azure Synapse Pipelines; import data with PolyBase and COPY using T-SQL; use data loading best practices in Azure Synapse Analytics.
After completing module 7, students will be able to: use data loading best practices in Azure Synapse Analytics; perform petabyte-scale ingestion with Azure Data Factory.

Module 8: Transform data with Azure Data Factory or Azure Synapse Pipelines
This module teaches students how to build data integration pipelines to ingest from multiple data sources, transform data using mapping data flows, and perform data movement into one or more data sinks.
Data integration with Azure Data Factory or Azure Synapse Pipelines; code-free transformation at scale with Azure Data Factory or Azure Synapse Pipelines.
Lab 8: Transform data with Azure Data Factory or Azure Synapse Pipelines. Execute code-free transformations at scale with Azure Synapse Pipelines; create a data pipeline to import poorly formatted CSV files; create mapping data flows.
After completing module 8, students will be able to: perform data integration with Azure Data Factory; perform code-free transformation at scale with Azure Data Factory.

Module 9: Orchestrate data movement and transformation in Azure Synapse Pipelines
In this module, you will learn how to create linked services, and orchestrate data movement and transformation using notebooks in Azure Synapse Pipelines.
Orchestrate data movement and transformation in Azure Data Factory.
Lab 9: Orchestrate data movement and transformation in Azure Synapse Pipelines. Integrate data from notebooks with Azure Data Factory or Azure Synapse Pipelines.
After completing module 9, students will be able to: orchestrate data movement and transformation in Azure Synapse Pipelines.

Module 10: Optimize query performance with dedicated SQL pools in Azure Synapse
In this module, students will learn strategies to optimize data storage and processing when using dedicated SQL pools in Azure Synapse Analytics. The student will know how to use developer features, such as windowing and HyperLogLog functions, use data loading best practices, and optimize and improve query performance.
Optimize data warehouse query performance in Azure Synapse Analytics; understand data warehouse developer features of Azure Synapse Analytics.
Lab 10: Optimize query performance with dedicated SQL pools in Azure Synapse. Understand developer features of Azure Synapse Analytics; optimize data warehouse query performance in Azure Synapse Analytics; improve query performance.
After completing module 10, students will be able to: optimize data warehouse query performance in Azure Synapse Analytics; understand data warehouse developer features of Azure Synapse Analytics.

Module 11: Analyze and optimize Data Warehouse storage
In this module, students will learn how to analyze then optimize the data storage of the Azure Synapse dedicated SQL pools. The student will know techniques to understand table space usage and column store storage details. Next the student will know how to compare storage requirements between identical tables that use different data types. Finally, the student will observe the impact materialized views have when executed in place of complex queries and learn how to avoid extensive logging by optimizing delete operations.
Analyze and optimize data warehouse storage in Azure Synapse Analytics.
Lab 11: Analyze and optimize Data Warehouse storage. Check for skewed data and space usage; understand column store storage details; study the impact of materialized views; explore rules for minimally logged operations.
After completing module 11, students will be able to: analyze and optimize data warehouse storage in Azure Synapse Analytics.

Module 12: Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link
In this module, students will learn how Azure Synapse Link enables seamless connectivity of an Azure Cosmos DB account to a Synapse workspace. The student will understand how to enable and configure Synapse Link, then how to query the Azure Cosmos DB analytical store using Apache Spark and serverless SQL.
Design hybrid transactional and analytical processing using Azure Synapse Analytics; configure Azure Synapse Link with Azure Cosmos DB; query Azure Cosmos DB with Apache Spark pools; query Azure Cosmos DB with serverless SQL pools.
Lab 12: Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link. Configure Azure Synapse Link with Azure Cosmos DB; query Azure Cosmos DB with Apache Spark for Synapse Analytics; query Azure Cosmos DB with serverless SQL pool for Azure Synapse Analytics.
After completing module 12, students will be able to: design hybrid transactional and analytical processing using Azure Synapse Analytics; configure Azure Synapse Link with Azure Cosmos DB; query Azure Cosmos DB with Apache Spark for Azure Synapse Analytics; query Azure Cosmos DB with serverless SQL for Azure Synapse Analytics.

Module 13: End-to-end security with Azure Synapse Analytics
In this module, students will learn how to secure a Synapse Analytics workspace and its supporting infrastructure. The student will observe the SQL Active Directory Admin, manage IP firewall rules, manage secrets with Azure Key Vault and access those secrets through a Key Vault linked service and pipeline activities. The student will understand how to implement column-level security, row-level security, and dynamic data masking when using dedicated SQL pools.
Secure a data warehouse in Azure Synapse Analytics; configure and manage secrets in Azure Key Vault; implement compliance controls for sensitive data.
Lab 13: End-to-end security with Azure Synapse Analytics. Secure Azure Synapse Analytics supporting infrastructure; secure the Azure Synapse Analytics workspace and managed services; secure Azure Synapse Analytics workspace data.
After completing module 13, students will be able to: secure a data warehouse in Azure Synapse Analytics; configure and manage secrets in Azure Key Vault; implement compliance controls for sensitive data.

Module 14: Real-time stream processing with Stream Analytics
In this module, students will learn how to process streaming data with Azure Stream Analytics. The student will ingest vehicle telemetry data into Event Hubs, then process that data in real time, using various windowing functions in Azure Stream Analytics. They will output the data to Azure Synapse Analytics. Finally, the student will learn how to scale the Stream Analytics job to increase throughput.
Enable reliable messaging for Big Data applications using Azure Event Hubs; work with data streams by using Azure Stream Analytics; ingest data streams with Azure Stream Analytics.
Lab 14: Real-time stream processing with Stream Analytics. Use Stream Analytics to process real-time data from Event Hubs; use Stream Analytics windowing functions to build aggregates and output to Synapse Analytics; scale the Azure Stream Analytics job to increase throughput through partitioning; repartition the stream input to optimize parallelization.
After completing module 14, students will be able to: enable reliable messaging for Big Data applications using Azure Event Hubs; work with data streams by using Azure Stream Analytics; ingest data streams with Azure Stream Analytics.

Module 15: Create a stream processing solution with Event Hubs and Azure Databricks
In this module, students will learn how to ingest and process streaming data at scale with Event Hubs and Spark Structured Streaming in Azure Databricks. The student will learn the key features and uses of Structured Streaming. The student will implement sliding windows to aggregate over chunks of data and apply watermarking to remove stale data. Finally, the student will connect to Event Hubs to read and write streams.
Process streaming data with Azure Databricks Structured Streaming.
Lab 15: Create a stream processing solution with Event Hubs and Azure Databricks. Explore key features and uses of Structured Streaming; stream data from a file and write it out to a distributed file system; use sliding windows to aggregate over chunks of data rather than all data; apply watermarking to remove stale data; connect to Event Hubs to read and write streams.
After completing module 15, students will be able to: process streaming data with Azure Databricks Structured Streaming.

Module 16: Build reports using Power BI integration with Azure Synapse Analytics
In this module, the student will learn how to integrate Power BI with their Synapse workspace to build reports in Power BI. The student will create a new data source and Power BI report in Synapse Studio. Then the student will learn how to improve query performance with materialized views and result-set caching. Finally, the student will explore the data lake with serverless SQL pools and create visualizations against that data in Power BI.
Create reports with Power BI using its integration with Azure Synapse Analytics.
Lab 16: Build reports using Power BI integration with Azure Synapse Analytics. Integrate an Azure Synapse workspace and Power BI; optimize integration with Power BI; improve query performance with materialized views and result-set caching; visualize data with serverless SQL and create a Power BI report.
After completing module 16, students will be able to: create reports with Power BI using its integration with Azure Synapse Analytics.

Module 17: Perform integrated Machine Learning processes in Azure Synapse Analytics
This module explores the integrated, end-to-end Azure Machine Learning and Azure Cognitive Services experience in Azure Synapse Analytics. You will learn how to connect an Azure Synapse Analytics workspace to an Azure Machine Learning workspace using a Linked Service and then trigger an Automated ML experiment that uses data from a Spark table. You will also learn how to use trained models from Azure Machine Learning or Azure Cognitive Services to enrich data in a SQL pool table and then serve prediction results using Power BI.
Use the integrated machine learning process in Azure Synapse Analytics.
Lab 17: Perform integrated Machine Learning processes in Azure Synapse Analytics. Create an Azure Machine Learning linked service; trigger an Auto ML experiment using data from a Spark table; enrich data using trained models; serve prediction results using Power BI.
After completing module 17, students will be able to: use the integrated machine learning process in Azure Synapse Analytics.
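As a flavour of the DataFrame work covered in modules 5 and 6, a minimal PySpark sketch might look like the following; the storage path, column names, and table name are hypothetical and not taken from the course labs:

```python
# Minimal PySpark sketch: explore and transform Parquet data from a data lake.
# Path, column names, and table name below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

# Read raw Parquet files from an ADLS Gen2 container (hypothetical path).
raw = spark.read.parquet("abfss://data@mydatalake.dfs.core.windows.net/sales/2024/")

curated = (
    raw.dropDuplicates(["order_id"])                          # drop duplicate orders
       .withColumn("order_date", F.to_date("order_timestamp"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("daily_revenue"))
)

# Persist the curated result as a table that SQL queries and reports can use.
curated.write.mode("overwrite").saveAsTable("curated_daily_revenue")
```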
Oslo, Bergen and 5 other locations 2 days 9 900 kr
13 May
27 May
Excel Videregående
2 days 8 500 kr
Create valuable customer experiences with Design Thinking
The world is changing, there is talk of the fourth industrial revolution, and more and more leaders are asking for «radical digital innovation». But what does that mean? Where should we start? And how can we secure a seat on the train into the future, when we live in a reality where technology develops exponentially and companies logarithmically? Many believe Design Thinking is the answer. Design thinking is a mindset and a user-oriented approach to innovation. The method combines designers' iterative approach to service and product development with economists' analytical and strategic methods for business development. The result is solutions that are more likely to answer user needs, are profitable and are in line with business strategy. Join a two-day intensive course in Design Thinking and learn to create great customer experiences.
Goals and format: The course is a mix of practical workshop and lectures, focusing on customer experience and the enormous digital opportunities we have today. With a «fail fast, fail cheap» approach we will experience first-hand what it means not to fall in love with the first idea, to collaborate across disciplines and, not least, to ALWAYS keep the user in focus. We will map the customer's needs and jobs, then design value propositions, make choices, explore good business models, test, evaluate and, not least, LEARN. The goal is for you to leave the course with a toolbox you can use in your own workplace.
The course covers: understanding today's digital landscape; strategic work with innovation; Design Thinking: theory and practical tools for designing valuable customer experiences.
Course leader: André Nordal Sylte, head of customer concepts at DNB. He works on exploring and specifying the most important needs of prioritized customer segments, and on designing value propositions for them. He has previously worked at Deloitte and Creuna. André is very knowledgeable and inspiring, and we promise you two highly instructive and content-rich evenings. Time: 25-26 November, 17:00-21:00, food served from 16:30.
Classroom + online course 1 day 5 490 kr
An Excel course for controllers/economists, angled from a controller's everyday work and focused on good methods for working with Excel lists. The top-up this course gives you will enable you to work more efficiently in Excel. The course's many examples show what you can really achieve in Excel, whether you are working with formulas/functions, lists or large Excel models. The course has been developed by controllers, with the aim that you should be able to attack your Excel challenges in a smart and efficient way. The course can also be tailored and delivered in-house, at your premises or ours.
Course content:
Data exchange/preparing data. Possible input methods: importing text files, web queries, pasting data from the web. Importing data with different data formats: text or database format. Format problems related to data import: "wrong" formats, removing parts of the information in a cell, removing duplicates, quickly finding typing errors.
Data/Consolidate. Consolidating data that lies in the same worksheet; consolidating data that is to be processed across worksheets; consolidating data that lies in different workbooks.
Tables/list functionality. The definition of a table: important to remember when the table is to be processed further. Working with a list: good advice on working with lists. Sorting the table: quickly and easily with custom sort keys. Filtering with AutoFilter: filter out only the data you want, or delete empty rows. Filtering with Advanced Filter: advice on the situations where the advanced filter is the smart choice and how to use it most easily.
Calculations in tables and lists. Calculations with several variables: the functions DSUMMER, DANTALL and DGJENNOMSNITT (DSUM, DCOUNT and DAVERAGE). Organizing and calculating data using subtotals, and learning how the result can be copied elsewhere. Lookups in a list: the functions FINN.KOLONNE and FINN.RAD (HLOOKUP and VLOOKUP). Building reports with pivot techniques: experience how easy it is to create reports using pivot tables!
"What if" analyses and optimization. Sensitivity analysis: Goal Seek as a way of calculating "backwards". Sensitivity analysis: data tables with 1 and 2 variables. Optimization: using Solver.
Collaborating with others. Sharing workbooks: when you want to work in the same workbook at the same time. Tracking changes: when you want to pick up the changes others have made.
4 good reasons to choose KnowledgeGroup: 1. Best-practice course content 2. The market's best instructors 3. Small course groups 4. Quality and start guarantee
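Purely as an illustration of the same kind of list work outside Excel (the course itself works entirely in Excel), a minimal Python/pandas analogue could look like this; the file and column names are invented:

```python
# Illustrative pandas analogue of the Excel list work above (not course material).
# File and column names are invented; Norwegian CSV exports often use ";" and ",".
import pandas as pd

orders = pd.read_csv("ordrer.csv", sep=";", decimal=",")
customers = pd.read_csv("kunder.csv", sep=";")

orders = orders.drop_duplicates()                            # remove duplicates
orders = orders.merge(customers, on="Kundenr", how="left")   # FINN.RAD-style lookup

# Pivot report: sum of Beløp per Region and Produkt (cf. pivot techniques).
report = orders.pivot_table(index="Region", columns="Produkt",
                            values="Beløp", aggfunc="sum", fill_value=0)
print(report)
```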
Oslo 4 days 22 500 kr
27 May
30 Sep
MB-220: Dynamics 365 Customer Insights - Journeys
Virtual classroom 2 days 17 350 kr
02 May
Due to the Coronavirus the course instructor is not able to come to Oslo. As an alternative we offer this course as a Blended Virtual Course.
Blended Virtual Course
The course is a hybrid of virtual training and self-study: teaching via Microsoft Teams/Zoom in short bursts at the beginning of the day, work set for the rest of the day, another online session at the end of the day for questions, and homework in the form of practice exams in the evening.
Course content
This 2-day course is for you if you want a certification in Agile Testing. The course builds on the ISTQB Foundation syllabus and gives you fundamental skills in Agile testing. Course dates: 14-15 December, exam 16 December, 09:00-10:15. (Chart: Bouvet course participants' test results vs the ISTQB average.)
On completion the Agile Tester will be able to:
1. Understand the fundamentals of Agile Software Development: how the various agile approaches differ and the concepts of the Agile Manifesto; how the tester needs to adapt in the agile process for maximum effectiveness; apply the various aspects relating to agile, such as writing and reviewing user stories, working in a continuously integrated environment, and performing agile retrospectives to improve the process.
2. Apply the fundamental Agile testing principles, practices and processes: how testing differs when working in an agile lifecycle compared to a more traditional lifecycle; how to work in a highly collaborative and integrated environment; how independent testing can be used within an agile project; how to report progress and the quality of the product to business stakeholders; understand the role and skills of a tester within an agile team.
3. Know the key testing methods, techniques and tools to use within an Agile project: understand Test Driven Development (TDD), Acceptance Driven Development (ADD), Behaviour Driven Development (BDD) and the concepts of the Test Pyramid; perform the role of a tester within a Scrum team; perform test estimation and assess product quality risks within an agile project; interpret the information produced during an agile project to support test activities; write ADD test cases; write test cases for both functional and non-functional user stories; execute exploratory testing within an agile project; recognise the various tools available to the tester for the various agile activities.
The exam
The ISTQB® Agile Testing exam is a 1 hour 15 minute multiple-choice exam with a pass mark of 65%. You must hold the ISTQB® Foundation certificate in software testing in order to sit this exam. The exam is remotely proctored.
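As a small, generic illustration of the Test Driven Development style mentioned above (not taken from the course), a test-first example in Python with pytest might look like this:

```python
# Tiny, generic TDD-style example (illustration only, not course material).
# The tests describe the expected behaviour; the function is written to make them pass.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount; reject negative percentages."""
    if percent < 0:
        raise ValueError("percent must be non-negative")
    return price * (1 - percent / 100)

def test_ten_percent_discount():
    assert apply_discount(100.0, 10) == pytest.approx(90.0)

def test_negative_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, -5)
```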
Virtual or in person 3 days 12 480 kr
The course MagiCAD VVS for AutoCAD provides a walkthrough of designing ventilation and piping installations in MagiCAD and AutoCAD.
Flexible courses for the future
New knowledge should have an immediate effect, and at the same time be durable and sustainable in the long term. NTI AS has 30 years of experience in courses and competence development, and trains around 10,000 people in Northern Europe every year within CAD, BIM, industry, design and construction.
MagiCAD VVS for AutoCAD basic course
A selection of the topics you will learn on the course: setting up a project; designing ventilation, heating and sanitary systems; connecting systems across several drawings; text functions, sections, drawings for printing; calculations, balancing, sound, quantity calculation; use of vendor-specific products; clash detection; automatic generation of openings.
Participants will learn to handle the drawings in a project (architect, HVAC drawings, etc.) and to enrich an HVAC model so that as much information as possible can be used for BIM, 2D drawings, flow calculations and sound calculations.
Tailored courses for companies
We want our customers to be the best at what they do, all the time. That is why we think long-term about competence development and see regular knowledge boosts as a natural part of a business. Our course concept is built on a modern set of different learning environments, which makes it easy to find the right solution whatever the need. Contact us on telephone 483 12 300, e-mail salg@nticad.no, or read more at www.nticad.no
Virtual classroom 2 hours 1 690 kr
In this webinar we go through the setup and presentation experience in Microsoft Teams meetings. Teams is developing rapidly, and on the presentation front in particular there have been major changes recently. The webinar covers the options for setting up a meeting, assigning the right roles, and creating the best presentation experience for the participants.
Creating Teams meetings via Outlook or Teams | Ad hoc meetings | Channel meetings | Meeting settings in the invitation | Meeting settings and administration while the meeting is running in Teams | Views | Presenting/sharing: what and how | Presenter views | Captions | Recording | Meeting notes before, during and after
Price: 1 690 kroner
Virtual or in person, nationwide 1 day 5 950 kr
28 May
Flexible courses for the future
New knowledge should have an immediate effect, and at the same time be durable and sustainable in the long term. NTI AS has 30 years of experience in courses and competence development, and trains around 10,000 people in Northern Europe every year within CAD, BIM, industry, design and construction.
Objective: Gain basic skills in navigating, communicating and doing quantity take-offs from BIM models in the IFC format using Solibri Site.
On the course you will learn to:
Assemble and navigate IFC models
Select groups of objects for closer study
Insert sections, measure, mark up and create slides from views of the model
Create and follow up ongoing issues in the form of Excel and BCF reports
View the results of rule checks performed on the model
Harvest information and quantities from the model based on existing templates and classifications
Create your own classifications and definitions for quantity take-off
Tailored courses: NTI recommends tailored courses for companies planning to send two or more participants on a Solibri course. The reason is that Solibri is used by many different actors and professions in the construction and property industry, so the open courses are often too general for some participants. In a tailored course, our course consultant maps the focus areas in advance and runs the course according to the company's needs, preferably based on the customer's own models. The benefit of the course is consequently much greater. Contact us on telephone 483 12 300, e-mail salg-no@nti.biz, or read more at www.nti.biz
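For readers who also want to work with IFC files programmatically, here is a minimal sketch using the open-source IfcOpenShell library; this is not part of Solibri or the course, and the file name is hypothetical:

```python
# Minimal sketch: open an IFC model and count building elements per IFC class,
# a crude analogue of a quantity take-off view. Uses the open-source IfcOpenShell
# library, not Solibri; "building.ifc" is a hypothetical file name.
from collections import Counter

import ifcopenshell

model = ifcopenshell.open("building.ifc")

counts = Counter(element.is_a() for element in model.by_type("IfcBuildingElement"))
for ifc_class, n in counts.most_common():
    print(f"{ifc_class:25s} {n}")
```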