IT courses
Kristiansund
More than 100 results (in KRISTIANSUND N) for IT courses

Online course 40 minutes 5 600 kr
MoP® is a framework and guidance for the management of projects and programmes in a portfolio. The MoP Foundation certification gives you an introduction to portfolio management …
You will be sent a "Core guidance" book and a certification voucher so that you can take the certification test at home or at work, for example. The voucher is valid for one year. The time of the certification test is arranged as described in the e-mail that comes with the voucher. The exam is supervised by a web-based proctor. The exam is in English. The exam format is multiple choice: 50 questions are to be answered, and you pass at 50% correct answers (i.e. 25 of 50 questions). Candidates have 40 minutes for the exam. No aids are permitted. Required prerequisites: none.
Online course 12 months 9 000 kr
ITIL® 4 Specialist: Create, Deliver and Support covers the "core" of ITIL®, the activities around the administration of services, and expands the scope of ITIL® to include …
The course focuses on integrating the different value streams and activities needed to create, deliver and support IT-enabled products and services, while also covering supporting practices, methods and tools. It gives candidates an understanding of service quality and improvement methods. The e-learning course contains 18 hours of instruction and is divided into 8 modules. Read more about ITIL® 4 on the AXELOS website. Includes: access to the ITIL® 4 Specialist: Create, Deliver and Support e-learning (in English) for 12 months, and an ITIL® 4 Specialist: Create, Deliver and Support online voucher for the certification test. ITIL®/PRINCE2®/MSP®/MoP® are registered trademarks of AXELOS Limited, used under permission of AXELOS Limited. All rights reserved.
2 days
ISO/IEC 27001 Foundation gives you a basic introduction to information security and security management.
Corporate offer! We now offer discounted prices for PECB ISO/IEC 27001 Foundation as an in-house course for groups (minimum 5 people). The course can be held at Orange Cyberdefense's premises at Lysaker Torg, or at your own offices. Contact us for a good offer. Do you have the security expertise you need? Orange Cyberdefense runs certification courses in cyber and information security delivered by the recognised certification and course provider PECB. PECB ISO/IEC 27001 Foundation is a 2-day course in which you learn the fundamental elements of implementing and managing an information security management system (ISMS) in accordance with ISO/IEC 27001. After the course, participants will understand the different parts of an ISMS, including policies, procedures, measurement, internal audit and continual improvement. The course is intended for employees who will take part in introducing an information security management system, or who need to learn more about information security and how it is governed in an organisation.
Learning objectives for the course:
- Know ISO/IEC 27001 and be able to see the relationships between ISO/IEC 27001, ISO/IEC 27002 and other frameworks.
- Understand the different approaches, standards, methods and techniques used to implement and manage an ISMS.
- Be able to operate the elements of an information security management system (ISMS).
Binding registration. We reserve the right to postpone a course if there are too few participants/registrations. A new course date will then be offered.
Online study 2 semesters 4 980 kr
On request
Academic year: 2013-2014
Delivery: autumn and spring
Credits: 5.0
Prerequisites: none
Assignments: 8 of 12 exercises must be approved in order to sit the exam
Personal tutor: yes
Assessment: written, individual, 3 hours
Responsible: Olav Skundberg
Exam dates: 16.12.13 / 26.05.14
Learning outcomes:
KNOWLEDGE. The candidate can:
- explain how a computer is exposed to attack through malicious software, and how to protect against this
- describe different types of network-based attacks and how to protect against them
- describe different encryption mechanisms and explain how digital certificates are used to achieve secure services
- refer to relevant laws and guidelines within security
- give an account of an organisational information security policy
SKILLS. The candidate can:
- check their own PC for malicious software
- check that installed software is up to date
- perform packet capture with Wireshark and interpret the result
GENERAL COMPETENCE. The candidate:
- is conscious of keeping software updated and of exercising good judgement online
Content: Malicious software: security holes, cookies, viruses and antivirus. Networks: virtual private networks (VPN), firewalls, demilitarised zone (DMZ), denial-of-service attacks. Secure services: encryption methods and checksums; digital certificates and Public Key Infrastructure (PKI) (the checksum idea is illustrated in the sketch below). Society and business: the Electronic Communications Act and the Personal Data Act; security handbook and ISO 27001.
Registration deadline: 25.08.13 / 25.01.14
Course: Internet and Security, NOK 4,980. The semester fee and exam costs are additional.
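The "secure services" topics above are easy to taste in code. Here is a minimal checksum sketch in Python, using only the standard library and a hypothetical file name; it is illustrative, not part of the course material:

```python
import hashlib

# Compute the SHA-256 checksum of a file, reading in chunks so that
# large files never need to fit in memory at once.
def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Verifying a download against a published checksum is then a plain
# string comparison; the expected value below is SHA-256 of empty input.
expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
print(sha256_of_file("empty.bin") == expected)  # "empty.bin" is hypothetical
```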
Oslo, Bergen and 1 other location 5 days 25 900 kr
27 May
27 May
01 Jul
C# 12 Development and .NET 8
2 days 12 900 kr
Do you want to work with different kinds of drawings in Visio, but feel you have not mastered the program? Would you also like to create your own templates so you can work more efficiently? Then the "Visio Basics" course is for you! The course can also be customised and held in-house, at your premises or ours.

Course content:

Day 1
- What is Visio? Get an overview. Get to know the program window and how to adapt it to your needs.
- Templates. How is a template structured, and how do you work with a drawing?
- Formatting. Learn to format, and what the formatting concept means.
- Stencils and shapes. What are stencils and shapes?
Working efficiently with Visio
- Building a drawing. Learn to build a drawing from scratch.
- Keyboard shortcuts. Efficient use of keyboard and mouse.
- Formatting. Use formatting to make drawings clear and the information as accessible as possible.
- Pages. Learn to work with multiple pages: naming them, deleting them, using backgrounds, etc.
- Practical exercises. Work on tailored exercises within the day's topics.
- Other Office programs. Learn to use Visio drawings in other Office programs.
Flowcharts and organisation charts
- Connectors. Learn to connect shapes efficiently.
- Layout. How do you make sure the shapes are placed accurately and clearly?
- Navigation. Build practical navigation between the pages of a larger drawing.

Day 2
Network diagrams
- Shape data. Attach practical information to the shapes in a drawing.
- Reports. How do you extract reports from a drawing?
Project plans
- Timeline. Illustrate the phases of a project clearly.
- Gantt chart. Show project information in more detail.
- Printing. Get an overview of the most common printing issues.
Your own templates
- Templates. What are templates, what are their strengths, and how can you make the most of them in your work?
- Stencils. Build your own collection of the shapes you need.
- Shapes. Learn to create your own customised shapes.
- Practical exercises. Work on tailored exercises within the day's topics.

4 good reasons to choose KnowledgeGroup: 1. Best-practice course content 2. The market's best instructors 3. Small course groups 4. Quality and start guarantee
In-house 3 days 27 000 kr
In this course, application developers learn how to design, develop, and deploy applications that seamlessly integrate components from the Google Cloud ecosystem.
Through a combination of presentations, demos, and hands-on labs, participants learn how to use GCP services and pre-trained machine learning APIs to build secure, scalable, and intelligent cloud-native applications.

Objectives. This course teaches participants the following skills:
- Use best practices for application development
- Choose the appropriate data storage option for application data
- Implement federated identity management
- Develop loosely coupled application components or microservices
- Integrate application components and data sources
- Debug, trace, and monitor applications
- Perform repeatable deployments with containers and deployment services
- Choose the appropriate application runtime environment; use Google Container Engine as a runtime environment and later switch to a no-ops solution with Google App Engine Flex

All courses will be delivered in partnership with ROI Training, Google Cloud Premier Partner, using a Google Authorized Trainer.

Course Outline
Module 1: Best Practices for Application Development
- Code and environment management
- Design and development of secure, scalable, reliable, loosely coupled application components and microservices
- Continuous integration and delivery
- Re-architecting applications for the cloud
Module 2: Google Cloud Client Libraries, Google Cloud SDK, and Google Firebase SDK
- How to set up and use Google Cloud Client Libraries, Google Cloud SDK, and Google Firebase SDK
- Lab: Set up Google Client Libraries, Google Cloud SDK, and Firebase SDK on a Linux instance and set up application credentials
Module 3: Overview of Data Storage Options
- Overview of options to store application data
- Use cases for Google Cloud Storage, Google Cloud Datastore, Cloud Bigtable, Google Cloud SQL, and Cloud Spanner
Module 4: Best Practices for Using Cloud Datastore
- Best practices related to: queries; built-in and composite indexes; inserting and deleting data (batch operations); transactions; error handling
- Bulk-loading data into Cloud Datastore by using Google Cloud Dataflow
- Lab: Store application data in Cloud Datastore
Module 5: Performing Operations on Buckets and Objects
- Operations that can be performed on buckets and objects
- Consistency model
- Error handling
Module 6: Best Practices for Using Cloud Storage
- Naming buckets for static websites and other uses
- Naming objects (from an access distribution perspective)
- Performance considerations
- Setting up and debugging a CORS configuration on a bucket
- Lab: Store files in Cloud Storage
Module 7: Handling Authentication and Authorization
- Cloud Identity and Access Management (IAM) roles and service accounts
- User authentication by using Firebase Authentication
- User authentication and authorization by using Cloud Identity-Aware Proxy
- Lab: Authenticate users by using Firebase Authentication
Module 8: Using Google Cloud Pub/Sub to Integrate Components of Your Application (a publisher sketch follows this outline)
- Topics, publishers, and subscribers
- Pull and push subscriptions
- Use cases for Cloud Pub/Sub
- Lab: Develop a backend service to process messages in a message queue
Module 9: Adding Intelligence to Your Application
- Overview of pre-trained machine learning APIs such as Cloud Vision API and Cloud Natural Language Processing API
Module 10: Using Cloud Functions for Event-Driven Processing
- Key concepts such as triggers, background functions, HTTP functions
- Use cases
- Developing and deploying functions
- Logging, error reporting, and monitoring
Module 11: Managing APIs with Google Cloud Endpoints
- Open API deployment configuration
- Lab: Deploy an API for your application
Module 12: Deploying an Application by Using Google Cloud Build, Google Cloud Container Registry, and Google Cloud Deployment Manager
- Creating and storing container images
- Repeatable deployments with deployment configuration and templates
- Lab: Use Deployment Manager to deploy a web application into Google App Engine flexible environment test and production environments
Module 13: Execution Environments for Your Application
- Considerations for choosing an execution environment for your application or service: Google Compute Engine, Kubernetes Engine, App Engine flexible environment, Cloud Functions, Cloud Dataflow
- Lab: Deploying your application on App Engine flexible environment
Module 14: Debugging, Monitoring, and Tuning Performance by Using Google Stackdriver
- Stackdriver Debugger
- Stackdriver Error Reporting
- Lab: Debugging an application error by using Stackdriver Debugger and Error Reporting
- Stackdriver Logging
- Key concepts related to Stackdriver Trace and Stackdriver Monitoring
- Lab: Use Stackdriver Monitoring and Stackdriver Trace to trace a request across services, observe, and optimize performance
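As a flavour of the Pub/Sub integration covered in Module 8, here is a minimal publisher sketch using the google-cloud-pubsub client library. The project and topic names are hypothetical, and the snippet is illustrative rather than course lab code:

```python
from google.cloud import pubsub_v1

# Hypothetical project and topic; both must exist in your GCP project.
project_id = "my-project"
topic_id = "orders"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

# Messages are raw bytes; keyword arguments become string attributes.
future = publisher.publish(topic_path, b'{"order_id": 42}', origin="web")
print("Published message", future.result())  # blocks until the server acks
```

A subscriber would typically pull these messages via a subscription and acknowledge each one after processing, which is the pattern the Module 8 lab's backend service exercises.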
Virtual or in person Bærum 4 days 14 450 kr
03 Jun
The course is aimed at people who will be designing HVAC and plumbing (VVS) systems in 3D.
Flexible courses for the future. New knowledge should have an immediate effect while also being durable and sustainable in the long term. NTI AS has 30 years of experience in courses and competence development, and trains around 10,000 people a year in Northern Europe within CAD, BIM, industry, design and construction.

Revit MEP MagiCAD VVS basic course. A selection of the topics you will learn on the course:
- Basic introduction to Revit
- Handling views, levels, navigating the model, display options, editing functions
- Setting up a project; linking models, setting up levels and views
- Designing ventilation systems, heating and domestic hot-water systems, and drainage systems
- IFC export
- Openings (provisions for voids)

After the course you will be able to use the basic functions of Autodesk Revit MEP and MagiCAD for Revit, with a focus on setting up, importing and linking projects, designing HVAC installations, and producing documentation, and to structure an HVAC model so that as much information as possible can be exploited for BIM.

This is a popular course, sign up now!

Customised courses for companies. We want our customers to be the best at what they do, all the time. That is why we think long-term about competence development and see regular knowledge upgrades as a natural part of a business. Our course concept is built on a modern set of different learning environments, making it easy to find the right solution for any need. Contact us on telephone 483 12 300, e-mail salg@nticad.no, or read more at www.nticad.no
Online study 2 semesters 4 980 kr
On request
Academic year: 2013-2014
Delivery: autumn and spring
Credits: 5.0
Prerequisites: none
Assignments: 6 of 10 exercises must be approved in order to sit the exam
Assessment: an individual 4-hour online home exam
Responsible: Svend Andreas Horgen
Exam dates: 17.12.13 / 20.05.14
Learning outcomes:
KNOWLEDGE. The candidate:
- can explain what a program is
- can give an account of the basic building blocks of programming, such as variables, control structures, arrays and functions
- can analyse a given problem and plan how it can be solved in general terms with program code
SKILLS. The candidate:
- can use a .NET-based development environment for writing code
- can create functional user interfaces
- can identify errors in program code
- can write structured program code that solves simple problems
- can use built-in functions from the .NET framework in their own code
GENERAL COMPETENCE. The candidate:
- is conscious of the importance of eliminating error situations
Content: Introduction to basic programming principles such as variables, data types, control structures (loops and decisions), arrays, user-defined functions and built-in functions. Designing user interfaces that look good and are easy to use. Error handling. Structuring and planning code well.
Registration deadline: 25.08.13 / 25.01.14
Course: Programming in Visual Basic, NOK 4,980. The semester fee and exam costs are additional.
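As a taste of the building blocks the course lists (variables, data types, control structures, arrays, functions and error handling), here is a minimal sketch. It is written in Python for brevity; the course itself uses Visual Basic and the .NET framework, so treat this as a language-neutral illustration:

```python
# Variables and data types
name = "Ada"
scores = [72, 88, 95]  # an array-like list

# A user-defined function with simple error handling
def average(values):
    try:
        return sum(values) / len(values)
    except ZeroDivisionError:
        return 0.0  # eliminate the error situation for empty input

# Control structures: a loop and a decision
for score in scores:
    if score >= 80:
        print(name, "scored well:", score)

print("average:", average(scores))
```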
Virtual classroom 3 hours 1 600 kr
With pivot tables you can quickly and elegantly summarise, analyse, explore and present your data in a few clicks. Do you have large amounts of data in Excel? The pivot table tool in Excel is a powerful way to quickly and easily summarise key figures for large data sets. In data processing, a pivot table is a data visualisation tool, and in Excel it is ideal for drawing conclusions from large amounts of data. Among other functions, a pivot table can automatically sort, count, and total or average the data stored in a table or spreadsheet, with the results shown in a second table of summarised data. Pivot tables also give you a very efficient way to adjust how your results are displayed, and on top of your pivot tables you can create attractive pivot charts that update automatically when your pivot tables change.

Prerequisites: You should already be comfortable working in Microsoft Excel and understand the use of formulas and functions.

Topics:
- What is a pivot table
- Understanding the different data types in Excel
- Creating pivot tables based on lists or tables
- Working with pivot table reports
- Using fields
- Grouping in pivot tables
- Pivot charts
- Slicers
- Timelines
- Refreshing pivot tables
- Creating your own calculations in pivot tables
- Formatting and changing the appearance of pivot tables
- Data models & queries
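The course is about Excel, but the idea of a pivot table is easy to show in code. Here is a minimal pandas sketch, with made-up data, that sums sales per region and product the way an Excel pivot table with grand totals would; it is an illustration, not course material:

```python
import pandas as pd

# A small, made-up sales table
df = pd.DataFrame({
    "Region":  ["North", "North", "South", "South", "South"],
    "Product": ["A", "B", "A", "A", "B"],
    "Sales":   [100, 150, 200, 50, 300],
})

# Rows = Region, columns = Product, values = sum of Sales,
# with row/column totals (margins), like Excel's Grand Total.
pivot = pd.pivot_table(df, index="Region", columns="Product",
                       values="Sales", aggfunc="sum",
                       margins=True, margins_name="Total")
print(pivot)
```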
Oslo, Bergen and 1 other location 5 days 27 500 kr
27 May
27 May
03 Jun
MS-102: Microsoft 365 Administrator Essentials
Oslo 5 days 27 500 kr
10 Jun
10 Jun
MS-203: Microsoft 365 Messaging (Exchange)
https://www.glasspaper.no/kurs/ms-203-microsoft-365-messaging/
Virtual classroom 4 days 25 000 kr
In this course, the student will learn about the data engineering patterns and practices as they pertain to working with batch and real-time analytical solutions using Azure data platform technologies.
COURSE OVERVIEW
Students will begin by understanding the core compute and storage technologies that are used to build an analytical solution. They will then explore how to design an analytical serving layer and focus on data engineering considerations for working with source files. The students will learn how to interactively explore data stored in files in a data lake. They will learn the various ingestion techniques that can be used to load data using the Apache Spark capability found in Azure Synapse Analytics or Azure Databricks, or how to ingest using Azure Data Factory or Azure Synapse pipelines. The students will also learn the various ways they can transform the data using the same technologies that are used to ingest it. They will spend time learning how to monitor and analyse the performance of an analytical system so that they can optimize the performance of data loads, or of queries issued against the system. They will understand the importance of implementing security to ensure that data is protected at rest and in transit. The student will then see how the data in an analytical system can be used to create dashboards or build predictive models in Azure Synapse Analytics.

TARGET AUDIENCE
The primary audience for this course is data professionals, data architects, and business intelligence professionals who want to learn about data engineering and building analytical solutions using data platform technologies that exist on Microsoft Azure. The secondary audience is data analysts and data scientists who work with analytical solutions built on Microsoft Azure.

COURSE OBJECTIVES
- Explore compute and storage options for data engineering workloads in Azure
- Design and implement the serving layer
- Understand data engineering considerations
- Run interactive queries using serverless SQL pools
- Explore, transform, and load data into the Data Warehouse using Apache Spark
- Perform data exploration and transformation in Azure Databricks
- Ingest and load data into the Data Warehouse
- Transform data with Azure Data Factory or Azure Synapse Pipelines
- Integrate data from notebooks with Azure Data Factory or Azure Synapse Pipelines
- Optimize query performance with dedicated SQL pools in Azure Synapse
- Analyze and optimize Data Warehouse storage
- Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link
- Perform end-to-end security with Azure Synapse Analytics
- Perform real-time stream processing with Stream Analytics
- Create a stream processing solution with Event Hubs and Azure Databricks
- Build reports using Power BI integration with Azure Synapse Analytics
- Perform integrated machine learning processes in Azure Synapse Analytics

COURSE CONTENT
Module 1: Explore compute and storage options for data engineering workloads
This module provides an overview of the Azure compute and storage technology options that are available to data engineers building analytical workloads. It teaches ways to structure the data lake, and to optimize the files for exploration, streaming, and batch workloads. The student will learn how to organize the data lake into levels of data refinement as they transform files through batch and stream processing. Then they will learn how to create indexes on their datasets, such as CSV, JSON, and Parquet files, and use them for potential query and workload acceleration.
- Introduction to Azure Synapse Analytics
- Describe Azure Databricks
- Introduction to Azure Data Lake storage
- Describe Delta Lake architecture
- Work with data streams by using Azure Stream Analytics
Lab 1: Explore compute and storage options for data engineering workloads
- Combine streaming and batch processing with a single pipeline
- Organize the data lake into levels of file transformation
- Index data lake storage for query and workload acceleration
After completing module 1, students will be able to: describe Azure Synapse Analytics, Azure Databricks, Azure Data Lake storage, Delta Lake architecture, and Azure Stream Analytics.

Module 2: Design and implement the serving layer
This module teaches how to design and implement data stores in a modern data warehouse to optimize analytical workloads. The student will learn how to design a multidimensional schema to store fact and dimension data, and then how to populate slowly changing dimensions through incremental data loading from Azure Data Factory.
- Design a multidimensional schema to optimize analytical workloads
- Code-free transformation at scale with Azure Data Factory
- Populate slowly changing dimensions in Azure Synapse Analytics pipelines
Lab 2: Designing and implementing the serving layer
- Design a star schema for analytical workloads
- Populate slowly changing dimensions with Azure Data Factory and mapping data flows
After completing module 2, students will be able to: design a star schema for analytical workloads, and populate slowly changing dimensions with Azure Data Factory and mapping data flows.

Module 3: Data engineering considerations for source files
This module explores data engineering considerations that are common when loading data into a modern data warehouse from files stored in an Azure Data Lake, and the security considerations associated with storing files in the data lake.
- Design a modern data warehouse using Azure Synapse Analytics
- Secure a data warehouse in Azure Synapse Analytics
Lab 3: Data engineering considerations
- Managing files in an Azure data lake
- Securing files stored in an Azure data lake
After completing module 3, students will be able to: design a modern data warehouse using Azure Synapse Analytics, and secure a data warehouse in Azure Synapse Analytics.

Module 4: Run interactive queries using Azure Synapse Analytics serverless SQL pools
In this module, students will learn how to work with files stored in the data lake and external file sources, through T-SQL statements executed by a serverless SQL pool in Azure Synapse Analytics. Students will query Parquet files stored in a data lake, as well as CSV files stored in an external data store. Next, they will create Azure Active Directory security groups and enforce access to files in the data lake through Role-Based Access Control (RBAC) and Access Control Lists (ACLs).
- Explore Azure Synapse serverless SQL pools capabilities
- Query data in the lake using Azure Synapse serverless SQL pools
- Create metadata objects in Azure Synapse serverless SQL pools
- Secure data and manage users in Azure Synapse serverless SQL pools
Lab 4: Run interactive queries using serverless SQL pools
- Query Parquet data with serverless SQL pools
- Create external tables for Parquet and CSV files
- Create views with serverless SQL pools
- Secure access to data in a data lake when using serverless SQL pools
- Configure data lake security using Role-Based Access Control (RBAC) and Access Control Lists
After completing module 4, students will be able to: understand serverless SQL pool capabilities, query data in the lake, create metadata objects, and secure data and manage users in Azure Synapse serverless SQL pools.

Module 5: Explore, transform, and load data into the Data Warehouse using Apache Spark
This module teaches how to explore data stored in a data lake, transform the data, and load data into a relational data store. The student will explore Parquet and JSON files and use techniques to query and transform JSON files with hierarchical structures. Then the student will use Apache Spark to load data into the data warehouse and join Parquet data in the data lake with data in the dedicated SQL pool.
- Understand big data engineering with Apache Spark in Azure Synapse Analytics
- Ingest data with Apache Spark notebooks in Azure Synapse Analytics
- Transform data with DataFrames in Apache Spark pools in Azure Synapse Analytics
- Integrate SQL and Apache Spark pools in Azure Synapse Analytics
Lab 5: Explore, transform, and load data into the Data Warehouse using Apache Spark
- Perform data exploration in Synapse Studio
- Ingest data with Spark notebooks in Azure Synapse Analytics
- Transform data with DataFrames in Spark pools in Azure Synapse Analytics
- Integrate SQL and Spark pools in Azure Synapse Analytics
After completing module 5, students will be able to: describe big data engineering with Apache Spark in Azure Synapse Analytics, ingest data with Apache Spark notebooks, transform data with DataFrames in Apache Spark pools, and integrate SQL and Apache Spark pools.

Module 6: Data exploration and transformation in Azure Databricks
This module teaches how to use various Apache Spark DataFrame methods to explore and transform data in Azure Databricks. The student will learn how to perform standard DataFrame methods to explore and transform data, as well as more advanced tasks such as removing duplicate data, manipulating date/time values, renaming columns, and aggregating data (a small DataFrame sketch follows this course outline).
- Describe Azure Databricks
- Read and write data in Azure Databricks
- Work with DataFrames in Azure Databricks
- Work with advanced DataFrame methods in Azure Databricks
Lab 6: Data exploration and transformation in Azure Databricks
- Use DataFrames in Azure Databricks to explore and filter data
- Cache a DataFrame for faster subsequent queries
- Remove duplicate data
- Manipulate date/time values
- Remove and rename DataFrame columns
- Aggregate data stored in a DataFrame
After completing module 6, students will be able to: describe Azure Databricks, read and write data, work with DataFrames, and use advanced DataFrame methods in Azure Databricks.

Module 7: Ingest and load data into the data warehouse
This module teaches students how to ingest data into the data warehouse through T-SQL scripts and Synapse Analytics integration pipelines. The student will learn how to load data into Synapse dedicated SQL pools with PolyBase and COPY using T-SQL, and how to use workload management along with a Copy activity in an Azure Synapse pipeline for petabyte-scale data ingestion.
- Use data loading best practices in Azure Synapse Analytics
- Petabyte-scale ingestion with Azure Data Factory
Lab 7: Ingest and load data into the Data Warehouse
- Perform petabyte-scale ingestion with Azure Synapse Pipelines
- Import data with PolyBase and COPY using T-SQL
- Use data loading best practices in Azure Synapse Analytics
After completing module 7, students will be able to: use data loading best practices in Azure Synapse Analytics, and perform petabyte-scale ingestion with Azure Data Factory.

Module 8: Transform data with Azure Data Factory or Azure Synapse Pipelines
This module teaches students how to build data integration pipelines to ingest from multiple data sources, transform data using mapping data flows, and perform data movement into one or more data sinks.
- Data integration with Azure Data Factory or Azure Synapse Pipelines
- Code-free transformation at scale with Azure Data Factory or Azure Synapse Pipelines
Lab 8: Transform data with Azure Data Factory or Azure Synapse Pipelines
- Execute code-free transformations at scale with Azure Synapse Pipelines
- Create a data pipeline to import poorly formatted CSV files
- Create mapping data flows
After completing module 8, students will be able to: perform data integration and code-free transformation at scale with Azure Data Factory.

Module 9: Orchestrate data movement and transformation in Azure Synapse Pipelines
In this module, you will learn how to create linked services, and orchestrate data movement and transformation using notebooks in Azure Synapse Pipelines.
- Orchestrate data movement and transformation in Azure Data Factory
Lab 9: Orchestrate data movement and transformation in Azure Synapse Pipelines
- Integrate data from notebooks with Azure Data Factory or Azure Synapse Pipelines
After completing module 9, students will be able to: orchestrate data movement and transformation in Azure Synapse Pipelines.

Module 10: Optimize query performance with dedicated SQL pools in Azure Synapse
In this module, students will learn strategies to optimize data storage and processing when using dedicated SQL pools in Azure Synapse Analytics. The student will know how to use developer features, such as windowing and HyperLogLog functions, use data loading best practices, and optimize and improve query performance.
- Optimize data warehouse query performance in Azure Synapse Analytics
- Understand data warehouse developer features of Azure Synapse Analytics
Lab 10: Optimize query performance with dedicated SQL pools in Azure Synapse
- Understand developer features of Azure Synapse Analytics
- Optimize data warehouse query performance in Azure Synapse Analytics
- Improve query performance
After completing module 10, students will be able to: optimize data warehouse query performance and understand the data warehouse developer features of Azure Synapse Analytics.

Module 11: Analyze and optimize Data Warehouse storage
In this module, students will learn how to analyze and optimize the data storage of Azure Synapse dedicated SQL pools. The student will know techniques to understand table space usage and column store storage details, and how to compare storage requirements between identical tables that use different data types. Finally, the student will observe the impact materialized views have when executed in place of complex queries, and learn how to avoid extensive logging by optimizing delete operations.
- Analyze and optimize data warehouse storage in Azure Synapse Analytics
Lab 11: Analyze and optimize Data Warehouse storage
- Check for skewed data and space usage
- Understand column store storage details
- Study the impact of materialized views
- Explore rules for minimally logged operations
After completing module 11, students will be able to: analyze and optimize data warehouse storage in Azure Synapse Analytics.

Module 12: Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link
In this module, students will learn how Azure Synapse Link enables seamless connectivity of an Azure Cosmos DB account to a Synapse workspace. The student will understand how to enable and configure Synapse Link, then how to query the Azure Cosmos DB analytical store using Apache Spark and serverless SQL.
- Design hybrid transactional and analytical processing using Azure Synapse Analytics
- Configure Azure Synapse Link with Azure Cosmos DB
- Query Azure Cosmos DB with Apache Spark pools
- Query Azure Cosmos DB with serverless SQL pools
Lab 12: Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link
- Configure Azure Synapse Link with Azure Cosmos DB
- Query Azure Cosmos DB with Apache Spark for Synapse Analytics
- Query Azure Cosmos DB with serverless SQL pool for Azure Synapse Analytics
After completing module 12, students will be able to: design hybrid transactional and analytical processing using Azure Synapse Analytics, configure Azure Synapse Link with Azure Cosmos DB, and query Azure Cosmos DB with Apache Spark and serverless SQL for Azure Synapse Analytics.

Module 13: End-to-end security with Azure Synapse Analytics
In this module, students will learn how to secure a Synapse Analytics workspace and its supporting infrastructure. The student will observe the SQL Active Directory Admin, manage IP firewall rules, and manage secrets with Azure Key Vault, accessing those secrets through a Key Vault linked service and pipeline activities. The student will understand how to implement column-level security, row-level security, and dynamic data masking when using dedicated SQL pools.
- Secure a data warehouse in Azure Synapse Analytics
- Configure and manage secrets in Azure Key Vault
- Implement compliance controls for sensitive data
Lab 13: End-to-end security with Azure Synapse Analytics
- Secure Azure Synapse Analytics supporting infrastructure
- Secure the Azure Synapse Analytics workspace and managed services
- Secure Azure Synapse Analytics workspace data
After completing module 13, students will be able to: secure a data warehouse in Azure Synapse Analytics, configure and manage secrets in Azure Key Vault, and implement compliance controls for sensitive data.

Module 14: Real-time stream processing with Stream Analytics
In this module, students will learn how to process streaming data with Azure Stream Analytics. The student will ingest vehicle telemetry data into Event Hubs, then process that data in real time using various windowing functions in Azure Stream Analytics, and output the data to Azure Synapse Analytics. Finally, the student will learn how to scale the Stream Analytics job to increase throughput.
- Enable reliable messaging for Big Data applications using Azure Event Hubs
- Work with data streams by using Azure Stream Analytics
- Ingest data streams with Azure Stream Analytics
Lab 14: Real-time stream processing with Stream Analytics
- Use Stream Analytics to process real-time data from Event Hubs
- Use Stream Analytics windowing functions to build aggregates and output to Synapse Analytics
- Scale the Azure Stream Analytics job to increase throughput through partitioning
- Repartition the stream input to optimize parallelization
After completing module 14, students will be able to: enable reliable messaging for Big Data applications using Azure Event Hubs, and work with and ingest data streams with Azure Stream Analytics.

Module 15: Create a stream processing solution with Event Hubs and Azure Databricks
In this module, students will learn how to ingest and process streaming data at scale with Event Hubs and Spark Structured Streaming in Azure Databricks. The student will learn the key features and uses of Structured Streaming, implement sliding windows to aggregate over chunks of data, apply watermarking to remove stale data, and connect to Event Hubs to read and write streams.
- Process streaming data with Azure Databricks Structured Streaming
Lab 15: Create a stream processing solution with Event Hubs and Azure Databricks
- Explore key features and uses of Structured Streaming
- Stream data from a file and write it out to a distributed file system
- Use sliding windows to aggregate over chunks of data rather than all data
- Apply watermarking to remove stale data
- Connect to Event Hubs to read and write streams
After completing module 15, students will be able to: process streaming data with Azure Databricks Structured Streaming.

Module 16: Build reports using Power BI integration with Azure Synapse Analytics
In this module, the student will learn how to integrate Power BI with their Synapse workspace to build reports in Power BI. The student will create a new data source and Power BI report in Synapse Studio, learn how to improve query performance with materialized views and result-set caching, and explore the data lake with serverless SQL pools and create visualizations against that data in Power BI.
- Create reports with Power BI using its integration with Azure Synapse Analytics
Lab 16: Build reports using Power BI integration with Azure Synapse Analytics
- Integrate an Azure Synapse workspace and Power BI
- Optimize integration with Power BI
- Improve query performance with materialized views and result-set caching
- Visualize data with serverless SQL and create a Power BI report
After completing module 16, students will be able to: create reports with Power BI using its integration with Azure Synapse Analytics.

Module 17: Perform integrated machine learning processes in Azure Synapse Analytics
This module explores the integrated, end-to-end Azure Machine Learning and Azure Cognitive Services experience in Azure Synapse Analytics. You will learn how to connect an Azure Synapse Analytics workspace to an Azure Machine Learning workspace using a linked service, and then trigger an Automated ML experiment that uses data from a Spark table. You will also learn how to use trained models from Azure Machine Learning or Azure Cognitive Services to enrich data in a SQL pool table and then serve prediction results using Power BI.
- Use the integrated machine learning process in Azure Synapse Analytics
Lab 17: Perform integrated machine learning processes in Azure Synapse Analytics
- Create an Azure Machine Learning linked service
- Trigger an Auto ML experiment using data from a Spark table
- Enrich data using trained models
- Serve prediction results using Power BI
After completing module 17, students will be able to: use the integrated machine learning process in Azure Synapse Analytics.
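As referenced in Module 6, the kind of DataFrame work the Spark and Databricks modules cover can be sketched in a few lines of PySpark. The file path and column names here are hypothetical, and the snippet is illustrative rather than course lab code:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

# Read a (hypothetical) Parquet dataset from the data lake
df = spark.read.parquet("/mnt/datalake/sales")

# Typical exploration/transformation steps from the module:
cleaned = (
    df.dropDuplicates()                                # remove duplicate rows
      .withColumnRenamed("cust_nm", "customer_name")   # rename a column
      .withColumn("order_date", F.to_date("order_ts")) # derive a date from a timestamp
)

# Aggregate: total sales amount per customer
totals = cleaned.groupBy("customer_name").agg(F.sum("amount").alias("total_sales"))
totals.show()
```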
Virtual or in person 2 days 9 250 kr
Learn to use your own scripts directly in the BIM model, both for working with geometry and with BIM data.
Flexible courses for the future. New knowledge should have an immediate effect while also being durable and sustainable in the long term. NTI AS has 30 years of experience in courses and competence development, and trains around 10,000 people a year in Northern Europe within CAD, BIM, industry, design and construction.

Dynamo for Revit. A selection of the topics you will learn on the course:
- Introduction to the user interface and basic functions
- Dynamo/Revit interaction
- Parametric/rule-based design
- Geometry in Dynamo
- Placing Revit elements
- Data extraction
- Creating an analytical model
- Writing to Revit parameters/numbering
- Adapting Revit elements
- Importing and processing external geometry

Do you know Grasshopper for Rhino and want to go further with complex geometries? If so, Dynamo is an option: rule-based design can be set up with direct integration to Revit. Dynamo for Revit opens up a world of previously unseen parametric access to your projects. With Dynamo as a visual programming tool, your own algorithms are connected to Revit's parametric database, whether the focus is form-finding, design optimisation, fabrication or automation. This, together with the two-way communication between Dynamo and Revit, makes the combination both strong and unique.

Customised courses for companies. We want our customers to be the best at what they do, all the time. That is why we think long-term about competence development and see regular knowledge upgrades as a natural part of a business. Our course concept is built on a modern set of different learning environments, making it easy to find the right solution for any need. Contact us on telephone 483 12 300, e-mail salg@nticad.no, or read more at www.nticad.no
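Dynamo graphs can also embed Python nodes that talk to the Revit API. As a flavour of the "data extraction" topic above, here is a minimal sketch of a Dynamo Python Script node that lists wall names; it assumes the standard Dynamo for Revit boilerplate (RevitServices/RevitAPI references) and is illustrative, not course material:

```python
# Inside a Dynamo Python Script node (IronPython)
import clr
clr.AddReference('RevitServices')
clr.AddReference('RevitAPI')
from RevitServices.Persistence import DocumentManager
from Autodesk.Revit.DB import FilteredElementCollector, BuiltInCategory

doc = DocumentManager.Instance.CurrentDBDocument

# Collect all wall instances in the current model
walls = (FilteredElementCollector(doc)
         .OfCategory(BuiltInCategory.OST_Walls)
         .WhereElementIsNotElementType()
         .ToElements())

# OUT is Dynamo's output port: here, one name per wall
OUT = [w.Name for w in walls]
```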
Virtual classroom 3 days 20 000 kr
In this course, the students will implement various data platform technologies into solutions that are in line with business and technical requirements, including on-premises, cloud, and hybrid data scenarios.
The students will also explore how to implement data security, including authentication, authorization, data policies and standards. They will define and implement data solution monitoring for both data storage and data processing activities. Finally, they will manage and troubleshoot Azure data solutions, which includes the optimization and disaster recovery of big data, batch processing and streaming data solutions.

Agenda
Module 1: Azure for the Data Engineer
- Explain the evolving world of data
- Survey the services in the Azure Data Platform
- Identify the tasks that are performed by a Data Engineer
- Describe the use cases for the cloud in a case study
Module 2: Working with Data Storage
- Choose a data storage approach in Azure
- Create an Azure Storage Account
- Explain Azure Data Lake storage
- Upload data into Azure Data Lake
Module 3: Enabling Team Based Data Science with Azure Databricks
- Explain Azure Databricks and machine learning platforms
- Describe the Team Data Science Process
- Provision Azure Databricks and workspaces
- Perform data preparation tasks
Module 4: Building Globally Distributed Databases with Cosmos DB (see the sketch after this agenda)
- Create an Azure Cosmos DB database built to scale
- Insert and query data in your Azure Cosmos DB database
- Provision a .NET Core app for Cosmos DB in Visual Studio Code
- Distribute your data globally with Azure Cosmos DB
Module 5: Working with Relational Data Stores in the Cloud
- SQL Database and SQL Data Warehouse
- Provision an Azure SQL database to store data
- Provision and load data into Azure SQL Data Warehouse
Module 6: Performing Real-Time Analytics with Stream Analytics
Module 7: Orchestrating Data Movement with Azure Data Factory
- Explain how Azure Data Factory works
- Create Linked Services and datasets
- Create pipelines and activities
- Azure Data Factory pipeline execution and triggers
Module 8: Securing Azure Data Platforms
- Configuring Network Security
- Configuring Authentication
- Configuring Authorization
- Auditing Security
Module 9: Monitoring and Troubleshooting Data Storage and Processing
- Data Engineering troubleshooting approach
- Azure Monitoring capabilities
- Troubleshoot common data issues
- Troubleshoot common data processing issues
Module 10: Integrating and Optimizing Data Platforms
- Integrating data platforms
- Optimizing data stores
- Optimize streaming data
- Manage disaster recovery
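Module 4's "insert and query data" step can be sketched with the azure-cosmos Python SDK. The endpoint, key, and database/container names below are placeholders, and this is an illustrative sketch rather than the course's .NET Core lab code:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key for a hypothetical Cosmos DB account
client = CosmosClient("https://myaccount.documents.azure.com:443/",
                      credential="<account-key>")

db = client.create_database_if_not_exists("shop")
container = db.create_container_if_not_exists(
    id="orders", partition_key=PartitionKey(path="/customerId"))

# Insert (upsert) a JSON document
container.upsert_item({"id": "1", "customerId": "c42", "total": 199.0})

# Query with SQL-like syntax; cross-partition because the filter
# is not on the partition key
for item in container.query_items(
        query="SELECT * FROM orders o WHERE o.total > 100",
        enable_cross_partition_query=True):
    print(item["id"], item["total"])
```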