azure-native.videoanalyzer.PipelineTopology
Pipeline topology describes the processing steps to be applied when processing content for a particular outcome. The topology should be defined according to the scenario to be achieved and can be reused across many pipeline instances which share the same processing characteristics. For instance, a pipeline topology which captures content from an RTSP camera and archives the content can be reused across many different cameras, as long as the same processing is to be applied across all the cameras. Individual instance properties can be defined through the use of user-defined parameters, which allow for a topology to be parameterized. This allows individual pipelines to refer to different values, such as individual cameras’ RTSP endpoints and credentials. Overall, a topology is composed of the following:
- Parameters: list of user-defined parameters that can be referenced across the topology nodes.
- Sources: list of one or more data source nodes, such as an RTSP source, which allow for content to be ingested from cameras.
- Processors: list of nodes which perform data analysis or transformations.
- Sinks: list of one or more data sinks which allow for data to be stored or exported to other destinations.
Azure REST API version: 2021-11-01-preview. Prior API version in Azure Native 1.x: 2021-11-01-preview.
Example Usage
Create or update a pipeline topology with an Rtsp source and video sink.
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AzureNative = Pulumi.AzureNative;
return await Deployment.RunAsync(() =>
{
var pipelineTopology = new AzureNative.VideoAnalyzer.PipelineTopology("pipelineTopology", new()
{
AccountName = "testaccount2",
Description = "Pipeline Topology 1 Description",
Kind = AzureNative.VideoAnalyzer.Kind.Live,
Parameters = new[]
{
new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
{
Default = "rtsp://microsoft.com/video.mp4",
Description = "rtsp source url parameter",
Name = "rtspUrlParameter",
Type = AzureNative.VideoAnalyzer.ParameterType.String,
},
new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
{
Default = "password",
Description = "rtsp source password parameter",
Name = "rtspPasswordParameter",
Type = AzureNative.VideoAnalyzer.ParameterType.SecretString,
},
},
PipelineTopologyName = "pipelineTopology1",
ResourceGroupName = "testrg",
Sinks = new[]
{
new AzureNative.VideoAnalyzer.Inputs.VideoSinkArgs
{
Inputs = new[]
{
new AzureNative.VideoAnalyzer.Inputs.NodeInputArgs
{
NodeName = "rtspSource",
},
},
Name = "videoSink",
Type = "#Microsoft.VideoAnalyzer.VideoSink",
VideoCreationProperties = new AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesArgs
{
Description = "Parking lot south entrance",
SegmentLength = "PT30S",
Title = "Parking Lot (Camera 1)",
},
VideoName = "camera001",
VideoPublishingOptions = new AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsArgs
{
DisableArchive = "false",
DisableRtspPublishing = "true",
},
},
},
Sku = new AzureNative.VideoAnalyzer.Inputs.SkuArgs
{
Name = AzureNative.VideoAnalyzer.SkuName.Live_S1,
},
Sources = new[]
{
new AzureNative.VideoAnalyzer.Inputs.RtspSourceArgs
{
Endpoint = new AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpointArgs
{
Credentials = new AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsArgs
{
Password = "${rtspPasswordParameter}",
Type = "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
Username = "username",
},
Type = "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
Url = "${rtspUrlParameter}",
},
Name = "rtspSource",
Transport = AzureNative.VideoAnalyzer.RtspTransport.Http,
Type = "#Microsoft.VideoAnalyzer.RtspSource",
},
},
});
});
package main
import (
videoanalyzer "github.com/pulumi/pulumi-azure-native-sdk/videoanalyzer/v2"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
_, err := videoanalyzer.NewPipelineTopology(ctx, "pipelineTopology", &videoanalyzer.PipelineTopologyArgs{
AccountName: pulumi.String("testaccount2"),
Description: pulumi.String("Pipeline Topology 1 Description"),
Kind: pulumi.String(videoanalyzer.KindLive),
Parameters: videoanalyzer.ParameterDeclarationArray{
&videoanalyzer.ParameterDeclarationArgs{
Default: pulumi.String("rtsp://microsoft.com/video.mp4"),
Description: pulumi.String("rtsp source url parameter"),
Name: pulumi.String("rtspUrlParameter"),
Type: pulumi.String(videoanalyzer.ParameterTypeString),
},
&videoanalyzer.ParameterDeclarationArgs{
Default: pulumi.String("password"),
Description: pulumi.String("rtsp source password parameter"),
Name: pulumi.String("rtspPasswordParameter"),
Type: pulumi.String(videoanalyzer.ParameterTypeSecretString),
},
},
PipelineTopologyName: pulumi.String("pipelineTopology1"),
ResourceGroupName: pulumi.String("testrg"),
Sinks: videoanalyzer.VideoSinkArray{
&videoanalyzer.VideoSinkArgs{
Inputs: videoanalyzer.NodeInputArray{
&videoanalyzer.NodeInputArgs{
NodeName: pulumi.String("rtspSource"),
},
},
Name: pulumi.String("videoSink"),
Type: pulumi.String("#Microsoft.VideoAnalyzer.VideoSink"),
VideoCreationProperties: &videoanalyzer.VideoCreationPropertiesArgs{
Description: pulumi.String("Parking lot south entrance"),
SegmentLength: pulumi.String("PT30S"),
Title: pulumi.String("Parking Lot (Camera 1)"),
},
VideoName: pulumi.String("camera001"),
VideoPublishingOptions: &videoanalyzer.VideoPublishingOptionsArgs{
DisableArchive: pulumi.String("false"),
DisableRtspPublishing: pulumi.String("true"),
},
},
},
Sku: &videoanalyzer.SkuArgs{
Name: pulumi.String(videoanalyzer.SkuName_Live_S1),
},
Sources: pulumi.Array{
videoanalyzer.RtspSource{
Endpoint: videoanalyzer.UnsecuredEndpoint{
Credentials: videoanalyzer.UsernamePasswordCredentials{
Password: "${rtspPasswordParameter}",
Type: "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
Username: "username",
},
Type: "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
Url: "${rtspUrlParameter}",
},
Name: "rtspSource",
Transport: videoanalyzer.RtspTransportHttp,
Type: "#Microsoft.VideoAnalyzer.RtspSource",
},
},
})
if err != nil {
return err
}
return nil
})
}
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.azurenative.videoanalyzer.PipelineTopology;
import com.pulumi.azurenative.videoanalyzer.PipelineTopologyArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.ParameterDeclarationArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.VideoSinkArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.VideoCreationPropertiesArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.VideoPublishingOptionsArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.SkuArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.NodeInputArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.RtspSourceArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.UnsecuredEndpointArgs;
import com.pulumi.azurenative.videoanalyzer.inputs.UsernamePasswordCredentialsArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
public static void main(String[] args) {
Pulumi.run(App::stack);
}
public static void stack(Context ctx) {
var pipelineTopology = new PipelineTopology("pipelineTopology", PipelineTopologyArgs.builder()
.accountName("testaccount2")
.description("Pipeline Topology 1 Description")
.kind("Live")
.parameters(
ParameterDeclarationArgs.builder()
.default_("rtsp://microsoft.com/video.mp4")
.description("rtsp source url parameter")
.name("rtspUrlParameter")
.type("String")
.build(),
ParameterDeclarationArgs.builder()
.default_("password")
.description("rtsp source password parameter")
.name("rtspPasswordParameter")
.type("SecretString")
.build())
.pipelineTopologyName("pipelineTopology1")
.resourceGroupName("testrg")
.sinks(VideoSinkArgs.builder()
.inputs(NodeInputArgs.builder()
.nodeName("rtspSource")
.build())
.name("videoSink")
.type("#Microsoft.VideoAnalyzer.VideoSink")
.videoCreationProperties(VideoCreationPropertiesArgs.builder()
.description("Parking lot south entrance")
.segmentLength("PT30S")
.title("Parking Lot (Camera 1)")
.build())
.videoName("camera001")
.videoPublishingOptions(VideoPublishingOptionsArgs.builder()
.disableArchive("false")
.disableRtspPublishing("true")
.build())
.build())
.sku(SkuArgs.builder()
.name("Live_S1")
.build())
.sources(RtspSourceArgs.builder()
.endpoint(UnsecuredEndpointArgs.builder()
.credentials(UsernamePasswordCredentialsArgs.builder()
.password("${rtspPasswordParameter}")
.type("#Microsoft.VideoAnalyzer.UsernamePasswordCredentials")
.username("username")
.build())
.type("#Microsoft.VideoAnalyzer.UnsecuredEndpoint")
.url("${rtspUrlParameter}")
.build())
.name("rtspSource")
.transport("Http")
.type("#Microsoft.VideoAnalyzer.RtspSource")
.build())
.build());
}
}
import pulumi
import pulumi_azure_native as azure_native
pipeline_topology = azure_native.videoanalyzer.PipelineTopology("pipelineTopology",
account_name="testaccount2",
description="Pipeline Topology 1 Description",
kind=azure_native.videoanalyzer.Kind.LIVE,
parameters=[
{
"default": "rtsp://microsoft.com/video.mp4",
"description": "rtsp source url parameter",
"name": "rtspUrlParameter",
"type": azure_native.videoanalyzer.ParameterType.STRING,
},
{
"default": "password",
"description": "rtsp source password parameter",
"name": "rtspPasswordParameter",
"type": azure_native.videoanalyzer.ParameterType.SECRET_STRING,
},
],
pipeline_topology_name="pipelineTopology1",
resource_group_name="testrg",
sinks=[{
"inputs": [{
"node_name": "rtspSource",
}],
"name": "videoSink",
"type": "#Microsoft.VideoAnalyzer.VideoSink",
"video_creation_properties": {
"description": "Parking\u202flot\u202fsouth\u202fentrance",
"segment_length": "PT30S",
"title": "Parking\u202fLot\u202f(Camera\u202f1)",
},
"video_name": "camera001",
"video_publishing_options": {
"disable_archive": "false",
"disable_rtsp_publishing": "true",
},
}],
sku={
"name": azure_native.videoanalyzer.SkuName.LIVE_S1,
},
sources=[{
"endpoint": {
"credentials": {
"password": "${rtspPasswordParameter}",
"type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "username",
},
"type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrlParameter}",
},
"name": "rtspSource",
"transport": azure_native.videoanalyzer.RtspTransport.HTTP,
"type": "#Microsoft.VideoAnalyzer.RtspSource",
}])
import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";
const pipelineTopology = new azure_native.videoanalyzer.PipelineTopology("pipelineTopology", {
accountName: "testaccount2",
description: "Pipeline Topology 1 Description",
kind: azure_native.videoanalyzer.Kind.Live,
parameters: [
{
"default": "rtsp://microsoft.com/video.mp4",
description: "rtsp source url parameter",
name: "rtspUrlParameter",
type: azure_native.videoanalyzer.ParameterType.String,
},
{
"default": "password",
description: "rtsp source password parameter",
name: "rtspPasswordParameter",
type: azure_native.videoanalyzer.ParameterType.SecretString,
},
],
pipelineTopologyName: "pipelineTopology1",
resourceGroupName: "testrg",
sinks: [{
inputs: [{
nodeName: "rtspSource",
}],
name: "videoSink",
type: "#Microsoft.VideoAnalyzer.VideoSink",
videoCreationProperties: {
description: "Parking\u202flot\u202fsouth\u202fentrance",
segmentLength: "PT30S",
title: "Parking\u202fLot\u202f(Camera\u202f1)",
},
videoName: "camera001",
videoPublishingOptions: {
disableArchive: "false",
disableRtspPublishing: "true",
},
}],
sku: {
name: azure_native.videoanalyzer.SkuName.Live_S1,
},
sources: [{
endpoint: {
credentials: {
password: "${rtspPasswordParameter}",
type: "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
username: "username",
},
type: "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
url: "${rtspUrlParameter}",
},
name: "rtspSource",
transport: azure_native.videoanalyzer.RtspTransport.Http,
type: "#Microsoft.VideoAnalyzer.RtspSource",
}],
});
resources:
pipelineTopology:
type: azure-native:videoanalyzer:PipelineTopology
properties:
accountName: testaccount2
description: Pipeline Topology 1 Description
kind: Live
parameters:
- default: rtsp://microsoft.com/video.mp4
description: rtsp source url parameter
name: rtspUrlParameter
type: String
- default: password
description: rtsp source password parameter
name: rtspPasswordParameter
type: SecretString
pipelineTopologyName: pipelineTopology1
resourceGroupName: testrg
sinks:
- inputs:
- nodeName: rtspSource
name: videoSink
type: '#Microsoft.VideoAnalyzer.VideoSink'
videoCreationProperties:
description: Parking lot south entrance
segmentLength: PT30S
title: Parking Lot (Camera 1)
videoName: camera001
videoPublishingOptions:
disableArchive: 'false'
disableRtspPublishing: 'true'
sku:
name: Live_S1
sources:
- endpoint:
credentials:
password: ${rtspPasswordParameter}
type: '#Microsoft.VideoAnalyzer.UsernamePasswordCredentials'
username: username
type: '#Microsoft.VideoAnalyzer.UnsecuredEndpoint'
url: ${rtspUrlParameter}
name: rtspSource
transport: Http
type: '#Microsoft.VideoAnalyzer.RtspSource'
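A topology such as the one above is typically instantiated once per camera, with concrete values supplied for the declared parameters. The following is a minimal TypeScript sketch, assuming the companion azure-native.videoanalyzer.LivePipeline resource from the same API version; the pipeline name, bitrate, and parameter values are illustrative and would normally come from stack configuration rather than plain text:
import * as azure_native from "@pulumi/azure-native";

// Instantiate the parameterized topology for one camera by supplying values
// for the parameters it declares ("rtspUrlParameter", "rtspPasswordParameter").
const livePipeline = new azure_native.videoanalyzer.LivePipeline("livePipeline", {
    accountName: "testaccount2",
    resourceGroupName: "testrg",
    livePipelineName: "livePipeline1",
    topologyName: "pipelineTopology1", // matches pipelineTopologyName above
    bitrateKbps: 500,                  // maximum bitrate reserved for this camera
    parameters: [
        { name: "rtspUrlParameter", value: "rtsp://contoso.example/camera001" },
        { name: "rtspPasswordParameter", value: "example-password" },
    ],
});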
Create PipelineTopology Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new PipelineTopology(name: string, args: PipelineTopologyArgs, opts?: CustomResourceOptions);
@overload
def PipelineTopology(resource_name: str,
args: PipelineTopologyArgs,
opts: Optional[ResourceOptions] = None)
@overload
def PipelineTopology(resource_name: str,
opts: Optional[ResourceOptions] = None,
account_name: Optional[str] = None,
kind: Optional[Union[str, Kind]] = None,
resource_group_name: Optional[str] = None,
sinks: Optional[Sequence[VideoSinkArgs]] = None,
sku: Optional[SkuArgs] = None,
sources: Optional[Sequence[Union[RtspSourceArgs, VideoSourceArgs]]] = None,
description: Optional[str] = None,
parameters: Optional[Sequence[ParameterDeclarationArgs]] = None,
pipeline_topology_name: Optional[str] = None,
processors: Optional[Sequence[EncoderProcessorArgs]] = None)
func NewPipelineTopology(ctx *Context, name string, args PipelineTopologyArgs, opts ...ResourceOption) (*PipelineTopology, error)
public PipelineTopology(string name, PipelineTopologyArgs args, CustomResourceOptions? opts = null)
public PipelineTopology(String name, PipelineTopologyArgs args)
public PipelineTopology(String name, PipelineTopologyArgs args, CustomResourceOptions options)
type: azure-native:videoanalyzer:PipelineTopology
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args PipelineTopologyArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var pipelineTopologyResource = new AzureNative.VideoAnalyzer.PipelineTopology("pipelineTopologyResource", new()
{
AccountName = "string",
Kind = "string",
ResourceGroupName = "string",
Sinks = new[]
{
new AzureNative.VideoAnalyzer.Inputs.VideoSinkArgs
{
Inputs = new[]
{
new AzureNative.VideoAnalyzer.Inputs.NodeInputArgs
{
NodeName = "string",
},
},
Name = "string",
Type = "#Microsoft.VideoAnalyzer.VideoSink",
VideoName = "string",
VideoCreationProperties = new AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesArgs
{
Description = "string",
RetentionPeriod = "string",
SegmentLength = "string",
Title = "string",
},
VideoPublishingOptions = new AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsArgs
{
DisableArchive = "string",
DisableRtspPublishing = "string",
},
},
},
Sku = new AzureNative.VideoAnalyzer.Inputs.SkuArgs
{
Name = "string",
},
Sources = new[]
{
new AzureNative.VideoAnalyzer.Inputs.RtspSourceArgs
{
Endpoint = new AzureNative.VideoAnalyzer.Inputs.TlsEndpointArgs
{
Credentials = new AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsArgs
{
Password = "string",
Type = "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
Username = "string",
},
Type = "#Microsoft.VideoAnalyzer.TlsEndpoint",
Url = "string",
TrustedCertificates = new AzureNative.VideoAnalyzer.Inputs.PemCertificateListArgs
{
Certificates = new[]
{
"string",
},
Type = "#Microsoft.VideoAnalyzer.PemCertificateList",
},
Tunnel = new AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelArgs
{
DeviceId = "string",
IotHubName = "string",
Type = "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
},
ValidationOptions = new AzureNative.VideoAnalyzer.Inputs.TlsValidationOptionsArgs
{
IgnoreHostname = "string",
IgnoreSignature = "string",
},
},
Name = "string",
Type = "#Microsoft.VideoAnalyzer.RtspSource",
Transport = "string",
},
},
Description = "string",
Parameters = new[]
{
new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
{
Name = "string",
Type = "string",
Default = "string",
Description = "string",
},
},
PipelineTopologyName = "string",
Processors = new[]
{
new AzureNative.VideoAnalyzer.Inputs.EncoderProcessorArgs
{
Inputs = new[]
{
new AzureNative.VideoAnalyzer.Inputs.NodeInputArgs
{
NodeName = "string",
},
},
Name = "string",
Preset = new AzureNative.VideoAnalyzer.Inputs.EncoderCustomPresetArgs
{
Type = "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
AudioEncoder = new AzureNative.VideoAnalyzer.Inputs.AudioEncoderAacArgs
{
Type = "#Microsoft.VideoAnalyzer.AudioEncoderAac",
BitrateKbps = "string",
},
VideoEncoder = new AzureNative.VideoAnalyzer.Inputs.VideoEncoderH264Args
{
Type = "#Microsoft.VideoAnalyzer.VideoEncoderH264",
BitrateKbps = "string",
FrameRate = "string",
Scale = new AzureNative.VideoAnalyzer.Inputs.VideoScaleArgs
{
Height = "string",
Mode = "string",
Width = "string",
},
},
},
Type = "#Microsoft.VideoAnalyzer.EncoderProcessor",
},
},
});
example, err := videoanalyzer.NewPipelineTopology(ctx, "pipelineTopologyResource", &videoanalyzer.PipelineTopologyArgs{
AccountName: pulumi.String("string"),
Kind: pulumi.String("string"),
ResourceGroupName: pulumi.String("string"),
Sinks: videoanalyzer.VideoSinkArray{
&videoanalyzer.VideoSinkArgs{
Inputs: videoanalyzer.NodeInputArray{
&videoanalyzer.NodeInputArgs{
NodeName: pulumi.String("string"),
},
},
Name: pulumi.String("string"),
Type: pulumi.String("#Microsoft.VideoAnalyzer.VideoSink"),
VideoName: pulumi.String("string"),
VideoCreationProperties: &videoanalyzer.VideoCreationPropertiesArgs{
Description: pulumi.String("string"),
RetentionPeriod: pulumi.String("string"),
SegmentLength: pulumi.String("string"),
Title: pulumi.String("string"),
},
VideoPublishingOptions: &videoanalyzer.VideoPublishingOptionsArgs{
DisableArchive: pulumi.String("string"),
DisableRtspPublishing: pulumi.String("string"),
},
},
},
Sku: &videoanalyzer.SkuArgs{
Name: pulumi.String("string"),
},
Sources: pulumi.Array{
videoanalyzer.RtspSource{
Endpoint: videoanalyzer.TlsEndpoint{
Credentials: videoanalyzer.UsernamePasswordCredentials{
Password: "string",
Type: "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
Username: "string",
},
Type: "#Microsoft.VideoAnalyzer.TlsEndpoint",
Url: "string",
TrustedCertificates: videoanalyzer.PemCertificateList{
Certificates: []string{
"string",
},
Type: "#Microsoft.VideoAnalyzer.PemCertificateList",
},
Tunnel: videoanalyzer.SecureIotDeviceRemoteTunnel{
DeviceId: "string",
IotHubName: "string",
Type: "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
},
ValidationOptions: videoanalyzer.TlsValidationOptions{
IgnoreHostname: "string",
IgnoreSignature: "string",
},
},
Name: "string",
Type: "#Microsoft.VideoAnalyzer.RtspSource",
Transport: "string",
},
},
Description: pulumi.String("string"),
Parameters: videoanalyzer.ParameterDeclarationArray{
&videoanalyzer.ParameterDeclarationArgs{
Name: pulumi.String("string"),
Type: pulumi.String("string"),
Default: pulumi.String("string"),
Description: pulumi.String("string"),
},
},
PipelineTopologyName: pulumi.String("string"),
Processors: videoanalyzer.EncoderProcessorArray{
&videoanalyzer.EncoderProcessorArgs{
Inputs: videoanalyzer.NodeInputArray{
&videoanalyzer.NodeInputArgs{
NodeName: pulumi.String("string"),
},
},
Name: pulumi.String("string"),
Preset: videoanalyzer.EncoderCustomPreset{
Type: "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
AudioEncoder: videoanalyzer.AudioEncoderAac{
Type: "#Microsoft.VideoAnalyzer.AudioEncoderAac",
BitrateKbps: "string",
},
VideoEncoder: videoanalyzer.VideoEncoderH264{
Type: "#Microsoft.VideoAnalyzer.VideoEncoderH264",
BitrateKbps: "string",
FrameRate: "string",
Scale: videoanalyzer.VideoScale{
Height: "string",
Mode: "string",
Width: "string",
},
},
},
Type: pulumi.String("#Microsoft.VideoAnalyzer.EncoderProcessor"),
},
},
})
var pipelineTopologyResource = new PipelineTopology("pipelineTopologyResource", PipelineTopologyArgs.builder()
.accountName("string")
.kind("string")
.resourceGroupName("string")
.sinks(VideoSinkArgs.builder()
.inputs(NodeInputArgs.builder()
.nodeName("string")
.build())
.name("string")
.type("#Microsoft.VideoAnalyzer.VideoSink")
.videoName("string")
.videoCreationProperties(VideoCreationPropertiesArgs.builder()
.description("string")
.retentionPeriod("string")
.segmentLength("string")
.title("string")
.build())
.videoPublishingOptions(VideoPublishingOptionsArgs.builder()
.disableArchive("string")
.disableRtspPublishing("string")
.build())
.build())
.sku(SkuArgs.builder()
.name("string")
.build())
.sources(RtspSourceArgs.builder()
.endpoint(TlsEndpointArgs.builder()
.credentials(UsernamePasswordCredentialsArgs.builder()
.password("string")
.type("#Microsoft.VideoAnalyzer.UsernamePasswordCredentials")
.username("string")
.build())
.type("#Microsoft.VideoAnalyzer.TlsEndpoint")
.url("string")
.trustedCertificates(PemCertificateListArgs.builder()
.certificates("string")
.type("#Microsoft.VideoAnalyzer.PemCertificateList")
.build())
.tunnel(SecureIotDeviceRemoteTunnelArgs.builder()
.deviceId("string")
.iotHubName("string")
.type("#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel")
.build())
.validationOptions(TlsValidationOptionsArgs.builder()
.ignoreHostname("string")
.ignoreSignature("string")
.build())
.build())
.name("string")
.type("#Microsoft.VideoAnalyzer.RtspSource")
.transport("string")
.build())
.description("string")
.parameters(ParameterDeclarationArgs.builder()
.name("string")
.type("string")
.default_("string")
.description("string")
.build())
.pipelineTopologyName("string")
.processors(EncoderProcessorArgs.builder()
.inputs(NodeInputArgs.builder()
.nodeName("string")
.build())
.name("string")
.preset(EncoderCustomPresetArgs.builder()
.type("#Microsoft.VideoAnalyzer.EncoderCustomPreset")
.audioEncoder(AudioEncoderAacArgs.builder()
.type("#Microsoft.VideoAnalyzer.AudioEncoderAac")
.bitrateKbps("string")
.build())
.videoEncoder(VideoEncoderH264Args.builder()
.type("#Microsoft.VideoAnalyzer.VideoEncoderH264")
.bitrateKbps("string")
.frameRate("string")
.scale(VideoScaleArgs.builder()
.height("string")
.mode("string")
.width("string")
.build())
.build())
.build())
.type("#Microsoft.VideoAnalyzer.EncoderProcessor")
.build())
.build());
pipeline_topology_resource = azure_native.videoanalyzer.PipelineTopology("pipelineTopologyResource",
account_name="string",
kind="string",
resource_group_name="string",
sinks=[{
"inputs": [{
"nodeName": "string",
}],
"name": "string",
"type": "#Microsoft.VideoAnalyzer.VideoSink",
"videoName": "string",
"videoCreationProperties": {
"description": "string",
"retentionPeriod": "string",
"segmentLength": "string",
"title": "string",
},
"videoPublishingOptions": {
"disableArchive": "string",
"disableRtspPublishing": "string",
},
}],
sku={
"name": "string",
},
sources=[{
"endpoint": {
"credentials": {
"password": "string",
"type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "string",
},
"type": "#Microsoft.VideoAnalyzer.TlsEndpoint",
"url": "string",
"trustedCertificates": {
"certificates": ["string"],
"type": "#Microsoft.VideoAnalyzer.PemCertificateList",
},
"tunnel": {
"deviceId": "string",
"iotHubName": "string",
"type": "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
},
"validationOptions": {
"ignoreHostname": "string",
"ignoreSignature": "string",
},
},
"name": "string",
"type": "#Microsoft.VideoAnalyzer.RtspSource",
"transport": "string",
}],
description="string",
parameters=[{
"name": "string",
"type": "string",
"default": "string",
"description": "string",
}],
pipeline_topology_name="string",
processors=[{
"inputs": [{
"nodeName": "string",
}],
"name": "string",
"preset": {
"type": "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
"audioEncoder": {
"type": "#Microsoft.VideoAnalyzer.AudioEncoderAac",
"bitrateKbps": "string",
},
"videoEncoder": {
"type": "#Microsoft.VideoAnalyzer.VideoEncoderH264",
"bitrateKbps": "string",
"frameRate": "string",
"scale": {
"height": "string",
"mode": "string",
"width": "string",
},
},
},
"type": "#Microsoft.VideoAnalyzer.EncoderProcessor",
}])
const pipelineTopologyResource = new azure_native.videoanalyzer.PipelineTopology("pipelineTopologyResource", {
accountName: "string",
kind: "string",
resourceGroupName: "string",
sinks: [{
inputs: [{
nodeName: "string",
}],
name: "string",
type: "#Microsoft.VideoAnalyzer.VideoSink",
videoName: "string",
videoCreationProperties: {
description: "string",
retentionPeriod: "string",
segmentLength: "string",
title: "string",
},
videoPublishingOptions: {
disableArchive: "string",
disableRtspPublishing: "string",
},
}],
sku: {
name: "string",
},
sources: [{
endpoint: {
credentials: {
password: "string",
type: "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
username: "string",
},
type: "#Microsoft.VideoAnalyzer.TlsEndpoint",
url: "string",
trustedCertificates: {
certificates: ["string"],
type: "#Microsoft.VideoAnalyzer.PemCertificateList",
},
tunnel: {
deviceId: "string",
iotHubName: "string",
type: "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
},
validationOptions: {
ignoreHostname: "string",
ignoreSignature: "string",
},
},
name: "string",
type: "#Microsoft.VideoAnalyzer.RtspSource",
transport: "string",
}],
description: "string",
parameters: [{
name: "string",
type: "string",
"default": "string",
description: "string",
}],
pipelineTopologyName: "string",
processors: [{
inputs: [{
nodeName: "string",
}],
name: "string",
preset: {
type: "#Microsoft.VideoAnalyzer.EncoderCustomPreset",
audioEncoder: {
type: "#Microsoft.VideoAnalyzer.AudioEncoderAac",
bitrateKbps: "string",
},
videoEncoder: {
type: "#Microsoft.VideoAnalyzer.VideoEncoderH264",
bitrateKbps: "string",
frameRate: "string",
scale: {
height: "string",
mode: "string",
width: "string",
},
},
},
type: "#Microsoft.VideoAnalyzer.EncoderProcessor",
}],
});
type: azure-native:videoanalyzer:PipelineTopology
properties:
accountName: string
description: string
kind: string
parameters:
- default: string
description: string
name: string
type: string
pipelineTopologyName: string
processors:
- inputs:
- nodeName: string
name: string
preset:
audioEncoder:
bitrateKbps: string
type: '#Microsoft.VideoAnalyzer.AudioEncoderAac'
type: '#Microsoft.VideoAnalyzer.EncoderCustomPreset'
videoEncoder:
bitrateKbps: string
frameRate: string
scale:
height: string
mode: string
width: string
type: '#Microsoft.VideoAnalyzer.VideoEncoderH264'
type: '#Microsoft.VideoAnalyzer.EncoderProcessor'
resourceGroupName: string
sinks:
- inputs:
- nodeName: string
name: string
type: '#Microsoft.VideoAnalyzer.VideoSink'
videoCreationProperties:
description: string
retentionPeriod: string
segmentLength: string
title: string
videoName: string
videoPublishingOptions:
disableArchive: string
disableRtspPublishing: string
sku:
name: string
sources:
- endpoint:
credentials:
password: string
type: '#Microsoft.VideoAnalyzer.UsernamePasswordCredentials'
username: string
trustedCertificates:
certificates:
- string
type: '#Microsoft.VideoAnalyzer.PemCertificateList'
tunnel:
deviceId: string
iotHubName: string
type: '#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel'
type: '#Microsoft.VideoAnalyzer.TlsEndpoint'
url: string
validationOptions:
ignoreHostname: string
ignoreSignature: string
name: string
transport: string
type: '#Microsoft.VideoAnalyzer.RtspSource'
PipelineTopology Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The PipelineTopology resource accepts the following input properties:
- Account
Name string - The Azure Video Analyzer account name.
- Kind
string | Pulumi.
Azure Native. Video Analyzer. Kind - Topology kind.
- ResourceGroupName string - The name of the resource group. The name is case insensitive.
- Sinks
List<Pulumi.
Azure Native. Video Analyzer. Inputs. Video Sink> - List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- Sku
Pulumi.
Azure Native. Video Analyzer. Inputs. Sku - Describes the properties of a SKU.
- Sources
List<Union<Pulumi.
Azure Native. Video Analyzer. Inputs. Rtsp Source, Pulumi. Azure Native. Video Analyzer. Inputs. Video Source Args>> - List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- Description string
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- Parameters
List<Pulumi.
Azure Native. Video Analyzer. Inputs. Parameter Declaration> - List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- PipelineTopologyName string - Pipeline topology unique identifier.
- Processors
List<Pulumi.
Azure Native. Video Analyzer. Inputs. Encoder Processor> - List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- Account
Name string - The Azure Video Analyzer account name.
- Kind string | Kind
- Topology kind.
- ResourceGroupName string - The name of the resource group. The name is case insensitive.
- Sinks
[]Video
Sink Args - List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- Sku
Sku
Args - Describes the properties of a SKU.
- Sources []interface{}
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- Description string
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- Parameters
[]Parameter
Declaration Args - List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- PipelineTopologyName string - Pipeline topology unique identifier.
- Processors
[]Encoder
Processor Args - List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- account
Name String - The Azure Video Analyzer account name.
- kind String | Kind
- Topology kind.
- resourceGroupName String - The name of the resource group. The name is case insensitive.
- sinks
List<Video
Sink> - List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku Sku
- Describes the properties of a SKU.
- sources
List<Either<Rtsp
Source,Video Source Args>> - List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description String
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters
List<Parameter
Declaration> - List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipelineTopologyName String - Pipeline topology unique identifier.
- processors
List<Encoder
Processor> - List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- account
Name string - The Azure Video Analyzer account name.
- kind string | Kind
- Topology kind.
- resourceGroupName string - The name of the resource group. The name is case insensitive.
- sinks
Video
Sink[] - List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku Sku
- Describes the properties of a SKU.
- sources
(Rtsp
Source | Video Source Args)[] - List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description string
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters
Parameter
Declaration[] - List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipelineTopologyName string - Pipeline topology unique identifier.
- processors
Encoder
Processor[] - List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- account_
name str - The Azure Video Analyzer account name.
- kind str | Kind
- Topology kind.
- resource_group_name str - The name of the resource group. The name is case insensitive.
- sinks
Sequence[Video
Sink Args] - List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku
Sku
Args - Describes the properties of a SKU.
- sources
Sequence[Union[Rtsp
Source Args, Video Source Args]] - List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description str
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters
Sequence[Parameter
Declaration Args] - List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipeline_topology_name str - Pipeline topology unique identifier.
- processors
Sequence[Encoder
Processor Args] - List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
- account
Name String - The Azure Video Analyzer account name.
- kind String | "Live" | "Batch"
- Topology kind.
- resourceGroupName String - The name of the resource group. The name is case insensitive.
- sinks List<Property Map>
- List of the topology sink nodes. Sink nodes allow pipeline data to be stored or exported.
- sku Property Map
- Describes the properties of a SKU.
- sources List<Property Map | Property Map>
- List of the topology source nodes. Source nodes enable external data to be ingested by the pipeline.
- description String
- An optional description of the pipeline topology. It is recommended that the expected use of the topology be described here.
- parameters List<Property Map>
- List of the topology parameter declarations. Parameters declared here can be referenced throughout the topology nodes through the use of "${PARAMETER_NAME}" string pattern. Parameters can have optional default values and can later be defined in individual instances of the pipeline.
- pipelineTopologyName String - Pipeline topology unique identifier.
- processors List<Property Map>
- List of the topology processor nodes. Processor nodes enable pipeline data to be analyzed, processed or transformed.
Outputs
All input properties are implicitly available as output properties. Additionally, the PipelineTopology resource produces the following output properties:
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of the resource
- System
Data Pulumi.Azure Native. Video Analyzer. Outputs. System Data Response - Azure Resource Manager metadata containing createdBy and modifiedBy information.
- Type string
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of the resource
- System
Data SystemData Response - Azure Resource Manager metadata containing createdBy and modifiedBy information.
- Type string
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of the resource
- system
Data SystemData Response - Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type String
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The name of the resource
- system
Data SystemData Response - Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type string
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The name of the resource
- system_
data SystemData Response - Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type str
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of the resource
- system
Data Property Map - Azure Resource Manager metadata containing createdBy and modifiedBy information.
- type String
- The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
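These outputs can be exported from the stack or fed into other resources. A minimal TypeScript sketch, reusing the pipelineTopology constant declared in the TypeScript example usage above; the export names are illustrative:

// The resource's name, type, and ARM system metadata are available as outputs.
export const topologyName = pipelineTopology.name;
export const topologyType = pipelineTopology.type;
export const topologyCreatedAt = pipelineTopology.systemData.apply(sd => sd.createdAt);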
Supporting Types
AudioEncoderAac, AudioEncoderAacArgs
- Bitrate
Kbps string - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- Bitrate
Kbps string - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate
Kbps String - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate
Kbps string - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate_
kbps str - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate
Kbps String - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
AudioEncoderAacResponse, AudioEncoderAacResponseArgs
- Bitrate
Kbps string - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- Bitrate
Kbps string - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate
Kbps String - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate
Kbps string - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate_
kbps str - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
- bitrate
Kbps String - Bitrate, in kilobits per second or Kbps, at which audio should be encoded (2-channel stereo audio at a sampling rate of 48 kHz). Allowed values are 96, 112, 128, 160, 192, 224, and 256. If omitted, the bitrate of the input audio is used.
EncoderCustomPreset, EncoderCustomPresetArgs
- Audio
Encoder Pulumi.Azure Native. Video Analyzer. Inputs. Audio Encoder Aac - Describes a custom preset for encoding audio.
- Video
Encoder Pulumi.Azure Native. Video Analyzer. Inputs. Video Encoder H264 - Describes a custom preset for encoding video.
- Audio
Encoder AudioEncoder Aac - Describes a custom preset for encoding audio.
- Video
Encoder VideoEncoder H264 - Describes a custom preset for encoding video.
- audio
Encoder AudioEncoder Aac - Describes a custom preset for encoding audio.
- video
Encoder VideoEncoder H264 - Describes a custom preset for encoding video.
- audio
Encoder AudioEncoder Aac - Describes a custom preset for encoding audio.
- video
Encoder VideoEncoder H264 - Describes a custom preset for encoding video.
- audio_
encoder AudioEncoder Aac - Describes a custom preset for encoding audio.
- video_
encoder VideoEncoder H264 - Describes a custom preset for encoding video.
- audio
Encoder Property Map - Describes a custom preset for encoding audio.
- video
Encoder Property Map - Describes a custom preset for encoding video.
EncoderCustomPresetResponse, EncoderCustomPresetResponseArgs
- Audio
Encoder Pulumi.Azure Native. Video Analyzer. Inputs. Audio Encoder Aac Response - Describes a custom preset for encoding audio.
- Video
Encoder Pulumi.Azure Native. Video Analyzer. Inputs. Video Encoder H264Response - Describes a custom preset for encoding video.
- Audio
Encoder AudioEncoder Aac Response - Describes a custom preset for encoding audio.
- Video
Encoder VideoEncoder H264Response - Describes a custom preset for encoding video.
- audio
Encoder AudioEncoder Aac Response - Describes a custom preset for encoding audio.
- video
Encoder VideoEncoder H264Response - Describes a custom preset for encoding video.
- audio
Encoder AudioEncoder Aac Response - Describes a custom preset for encoding audio.
- video
Encoder VideoEncoder H264Response - Describes a custom preset for encoding video.
- audio_
encoder AudioEncoder Aac Response - Describes a custom preset for encoding audio.
- video_
encoder VideoEncoder H264Response - Describes a custom preset for encoding video.
- audio
Encoder Property Map - Describes a custom preset for encoding audio.
- video
Encoder Property Map - Describes a custom preset for encoding video.
EncoderProcessor, EncoderProcessorArgs
- Inputs
List<Pulumi.
Azure Native. Video Analyzer. Inputs. Node Input> - An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderCustomPreset | Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderSystemPreset - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- Inputs
[]Node
Input - An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset EncoderCustomPreset | EncoderSystemPreset - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
List<Node
Input> - An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset EncoderCustomPreset | EncoderSystemPreset - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
Node
Input[] - An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- preset EncoderCustomPreset | EncoderSystemPreset - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
Sequence[Node
Input] - An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- preset EncoderCustomPreset | EncoderSystemPreset - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset Property Map | Property Map
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
EncoderProcessorResponse, EncoderProcessorResponseArgs
- Inputs
List<Pulumi.
Azure Native. Video Analyzer. Inputs. Node Input Response> - An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderCustomPresetResponse | Pulumi.AzureNative.VideoAnalyzer.Inputs.EncoderSystemPresetResponse - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- Inputs
[]Node
Input Response - An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- Preset EncoderCustomPresetResponse | EncoderSystemPresetResponse - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
List<Node
Input Response> - An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset EncoderCustomPresetResponse | EncoderSystemPresetResponse - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
Node
Input Response[] - An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- preset EncoderCustomPresetResponse | EncoderSystemPresetResponse - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs
Sequence[Node
Input Response] - An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- preset EncoderCustomPresetResponse | EncoderSystemPresetResponse - The encoder preset, which defines the recipe or instructions on how the input content should be processed.
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- preset Property Map | Property Map
- The encoder preset, which defines the recipe or instructions on how the input content should be processed.
EncoderSystemPreset, EncoderSystemPresetArgs
- Name
string | Pulumi.
Azure Native. Video Analyzer. Encoder System Preset Type - Name of the built-in encoding preset.
- Name
string | Encoder
System Preset Type - Name of the built-in encoding preset.
- name
String | Encoder
System Preset Type - Name of the built-in encoding preset.
- name
string | Encoder
System Preset Type - Name of the built-in encoding preset.
- name
str | Encoder
System Preset Type - Name of the built-in encoding preset.
- name String | "SingleLayer_540p_H264_AAC" | "SingleLayer_720p_H264_AAC" | "SingleLayer_1080p_H264_AAC" | "SingleLayer_2160p_H264_AAC" - Name of the built-in encoding preset.
EncoderSystemPresetResponse, EncoderSystemPresetResponseArgs
- Name string
- Name of the built-in encoding preset.
- Name string
- Name of the built-in encoding preset.
- name String
- Name of the built-in encoding preset.
- name string
- Name of the built-in encoding preset.
- name str
- Name of the built-in encoding preset.
- name String
- Name of the built-in encoding preset.
EncoderSystemPresetType, EncoderSystemPresetTypeArgs
- SingleLayer_540p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_720p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_1080p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_2160p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- EncoderSystemPresetType_SingleLayer_540p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- EncoderSystemPresetType_SingleLayer_720p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- EncoderSystemPresetType_SingleLayer_1080p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- EncoderSystemPresetType_SingleLayer_2160p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_540p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_720p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_1080p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_2160p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_540p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_720p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SingleLayer_1080p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SingleLayer_2160p_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SINGLE_LAYER_540P_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SINGLE_LAYER_720P_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- SINGLE_LAYER_1080P_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- SINGLE_LAYER_2160P_H264_AAC - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- "SingleLayer_540p_H264_AAC" - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 540 pixels, and at a maximum bitrate of 2000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- "SingleLayer_720p_H264_AAC" - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 720 pixels, and at a maximum bitrate of 3500 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 96 Kbps.
- "SingleLayer_1080p_H264_AAC" - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 1080 pixels, and at a maximum bitrate of 6000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
- "SingleLayer_2160p_H264_AAC" - Produces an MP4 file where the video is encoded with H.264 codec at a picture height of 2160 pixels, and at a maximum bitrate of 16000 Kbps. Encoded video has the same average frame rate as the input. The aspect ratio of the input is preserved. If the input content has audio, then it is encoded with AAC-LC codec at 128 Kbps.
Kind, KindArgs
- Live - Live pipeline topology resource.
- Batch - Batch pipeline topology resource.

The SDK constants use language-specific spellings, for example KindLive and KindBatch, LIVE and BATCH, or the quoted strings "Live" and "Batch".
NodeInput, NodeInputArgs
- NodeName string - The name of the upstream node in the pipeline whose output is used as input of the current node.

(Across the SDKs this property appears as NodeName, nodeName, or node_name, following each language's naming conventions; the same casing conventions apply to the other property tables below.)
NodeInputResponse, NodeInputResponseArgs
- NodeName string - The name of the upstream node in the pipeline whose output is used as input of the current node.
ParameterDeclaration, ParameterDeclarationArgs
- Name string - Name of the parameter.
- Type string | Pulumi.AzureNative.VideoAnalyzer.ParameterType - Type of the parameter. Allowed values: "String", "SecretString", "Int", "Double", "Bool".
- Default string - The default value for the parameter to be used if the pipeline does not specify a value.
- Description string - Description of the parameter.
ParameterDeclarationResponse, ParameterDeclarationResponseArgs
- Name string - Name of the parameter.
- Type string - Type of the parameter.
- Default string - The default value for the parameter to be used if the pipeline does not specify a value.
- Description string - Description of the parameter.
ParameterType, ParameterTypeArgs
- String - The parameter's value is a string.
- SecretString - The parameter's value is a string that holds sensitive information.
- Int - The parameter's value is a 32-bit signed integer.
- Double - The parameter's value is a 64-bit double-precision floating point.
- Bool - The parameter's value is a boolean value that is either true or false.

The SDK constants use language-specific spellings, for example ParameterTypeSecretString, SECRET_STRING, or the quoted string "SecretString".
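As a quick illustration of the typed parameters above, the sketch below declares a Double and a Bool parameter with defaults. The parameter names and default values are hypothetical; the args slot into the Parameters list of a PipelineTopology resource, and nodes reference a declared parameter by name using the ${parameterName} substitution syntax.

using AzureNative = Pulumi.AzureNative;

// Hypothetical parameter declarations illustrating the non-string ParameterType values.
var extraParameters = new[]
{
    new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
    {
        Name = "maxSamplesPerSecondParameter",   // hypothetical name
        Type = AzureNative.VideoAnalyzer.ParameterType.Double,
        Default = "0.5",                         // defaults are always passed as strings
        Description = "Maximum samples per second (double parameter).",
    },
    new AzureNative.VideoAnalyzer.Inputs.ParameterDeclarationArgs
    {
        Name = "disableArchiveParameter",        // hypothetical name
        Type = AzureNative.VideoAnalyzer.ParameterType.Bool,
        Default = "false",
        Description = "Boolean parameter, referenced elsewhere as ${disableArchiveParameter}.",
    },
};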
PemCertificateList, PemCertificateListArgs
- Certificates List&lt;string&gt; - PEM formatted public certificates. One certificate per entry.
PemCertificateListResponse, PemCertificateListResponseArgs
- Certificates List&lt;string&gt; - PEM formatted public certificates. One certificate per entry.
RtspSource, RtspSourceArgs
- Endpoint Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsEndpoint | Pulumi.AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpoint - RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- Name string - Node name. Must be unique within the topology.
- Transport string | Pulumi.AzureNative.VideoAnalyzer.RtspTransport - Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long-lived HTTP connections, and the RTP packets are interleaved in the HTTP connections alongside the RTSP messages. Allowed values: "Http", "Tcp".
RtspSourceResponse, RtspSourceResponseArgs
- Endpoint Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsEndpointResponse | Pulumi.AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpointResponse - RTSP endpoint information for Video Analyzer to connect to. This contains the required information for Video Analyzer to connect to RTSP cameras and/or generic RTSP servers.
- Name string - Node name. Must be unique within the topology.
- Transport string - Network transport utilized by the RTSP and RTP exchange: TCP or HTTP. When using TCP, the RTP packets are interleaved on the TCP RTSP connection. When using HTTP, the RTSP messages are exchanged through long-lived HTTP connections, and the RTP packets are interleaved in the HTTP connections alongside the RTSP messages.
RtspTransport, RtspTransportArgs
- Http - HTTP transport. RTSP messages are exchanged over long-running HTTP requests and RTP packets are interleaved within the HTTP channel.
- Tcp - TCP transport. RTSP is used directly over TCP and RTP packets are interleaved within the TCP channel.

The SDK constants use language-specific spellings, for example RtspTransportHttp and RtspTransportTcp, HTTP and TCP, or the quoted strings "Http" and "Tcp".
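For instance, the transport can be pinned on an RtspSource node. A minimal sketch follows, limited to the properties listed above; the node name and the ${rtspUrlParameter} reference are placeholders that assume a matching parameter declaration, and depending on the provider version the node may also need its type discriminator set.

using AzureNative = Pulumi.AzureNative;

// Minimal RTSP source forcing TCP interleaving (names and parameter references are placeholders).
var tcpRtspSource = new AzureNative.VideoAnalyzer.Inputs.RtspSourceArgs
{
    Name = "rtspSource",
    Transport = AzureNative.VideoAnalyzer.RtspTransport.Tcp,
    Endpoint = new AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpointArgs
    {
        Type = "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        Url = "${rtspUrlParameter}",
    },
};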
SecureIotDeviceRemoteTunnel, SecureIotDeviceRemoteTunnelArgs
- DeviceId string - The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- IotHubName string - Name of the IoT Hub.
SecureIotDeviceRemoteTunnelResponse, SecureIotDeviceRemoteTunnelResponseArgs
- DeviceId string - The IoT device id to use when establishing the remote tunnel. This string is case-sensitive.
- IotHubName string - Name of the IoT Hub.
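When the camera sits behind a firewall, the tunnel is attached to the endpoint rather than to the source node. Below is a minimal sketch assuming an IoT Hub named "hub001" and a device id "camera-device-01" (both hypothetical); depending on the provider version the tunnel may also require its own type discriminator.

using AzureNative = Pulumi.AzureNative;

// Hypothetical IoT Hub remote tunnel attached to an unsecured RTSP endpoint.
var tunneledEndpoint = new AzureNative.VideoAnalyzer.Inputs.UnsecuredEndpointArgs
{
    Type = "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
    Url = "rtsp://localhost:554/stream1",        // local address reachable through the tunnel
    Tunnel = new AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelArgs
    {
        IotHubName = "hub001",                   // hypothetical IoT Hub name
        DeviceId = "camera-device-01",           // hypothetical, case-sensitive device id
    },
};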
Sku, SkuArgs
- Name string | Pulumi.AzureNative.VideoAnalyzer.SkuName - The SKU name. Allowed values: "Live_S1", "Batch_S1".
SkuName, SkuNameArgs
- Live_S1 - Represents the Live S1 SKU name. Using this SKU you can create live pipelines to capture, record, and stream live video from RTSP-capable cameras at bitrate settings from 0.5 Kbps to 3000 Kbps.
- Batch_S1 - Represents the Batch S1 SKU name. Using this SKU you can create pipeline jobs to process recorded content.

The SDK constants use language-specific spellings, for example SkuName_Live_S1 and SkuName_Batch_S1, LIVE_S1 and BATCH_S1, or the quoted strings "Live_S1" and "Batch_S1".
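The SKU is typically chosen to match the topology kind: the live example on this page pairs Kind.Live with Live_S1, and by the same (assumed) pairing a batch topology would use Batch_S1. A minimal fragment:

using AzureNative = Pulumi.AzureNative;

// Hypothetical pairing of a batch topology kind with the Batch_S1 SKU.
var batchKind = AzureNative.VideoAnalyzer.Kind.Batch;

var batchSku = new AzureNative.VideoAnalyzer.Inputs.SkuArgs
{
    Name = AzureNative.VideoAnalyzer.SkuName.Batch_S1,
};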
SkuResponse, SkuResponseArgs
SystemDataResponse, SystemDataResponseArgs
- CreatedAt string - The timestamp of resource creation (UTC).
- CreatedBy string - The identity that created the resource.
- CreatedByType string - The type of identity that created the resource.
- LastModifiedAt string - The timestamp of resource last modification (UTC).
- LastModifiedBy string - The identity that last modified the resource.
- LastModifiedByType string - The type of identity that last modified the resource.
TlsEndpoint, TlsEndpointArgs
- Credentials Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentials - Credentials to be presented to the endpoint.
- Url string - The endpoint URL for Video Analyzer to connect to.
- TrustedCertificates Pulumi.AzureNative.VideoAnalyzer.Inputs.PemCertificateList - List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- Tunnel Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnel - Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- ValidationOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsValidationOptions - Validation options to use when authenticating a TLS connection. By default, strict validation is used.
TlsEndpointResponse, TlsEndpointResponseArgs
- Credentials Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsResponse - Credentials to be presented to the endpoint.
- Url string - The endpoint URL for Video Analyzer to connect to.
- TrustedCertificates Pulumi.AzureNative.VideoAnalyzer.Inputs.PemCertificateListResponse - List of trusted certificate authorities when authenticating a TLS connection. A null list designates that Azure Video Analyzer's list of trusted authorities should be used.
- Tunnel Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelResponse - Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
- ValidationOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.TlsValidationOptionsResponse - Validation options to use when authenticating a TLS connection. By default, strict validation is used.
TlsValidationOptions, TlsValidationOptionsArgs
- IgnoreHostname string - When set to 'true', causes the certificate subject name validation to be skipped. Default is 'false'.
- IgnoreSignature string - When set to 'true', causes the certificate chain trust validation to be skipped. Default is 'false'.
TlsValidationOptionsResponse, TlsValidationOptionsResponseArgs
- IgnoreHostname string - When set to 'true', causes the certificate subject name validation to be skipped. Default is 'false'.
- IgnoreSignature string - When set to 'true', causes the certificate chain trust validation to be skipped. Default is 'false'.
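Putting the TLS-related types together, the sketch below configures a TLS-protected RTSP endpoint that trusts a private CA and relaxes hostname validation. The camera URL, credentials, and certificate contents are placeholders, the "#Microsoft.VideoAnalyzer.TlsEndpoint" discriminator is assumed by analogy with the other endpoint types, and nested types may also require their own discriminators depending on the provider version.

using AzureNative = Pulumi.AzureNative;

// Hypothetical TLS endpoint for an RTSPS camera that uses a private CA.
var tlsEndpoint = new AzureNative.VideoAnalyzer.Inputs.TlsEndpointArgs
{
    Type = "#Microsoft.VideoAnalyzer.TlsEndpoint",       // assumed discriminator
    Url = "rtsps://camera.contoso.example:322/stream1",  // hypothetical camera URL
    Credentials = new AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsArgs
    {
        Type = "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
        Username = "username",
        Password = "${rtspPasswordParameter}",            // assumes a SecretString parameter with this name
    },
    TrustedCertificates = new AzureNative.VideoAnalyzer.Inputs.PemCertificateListArgs
    {
        Certificates = new[]
        {
            "-----BEGIN CERTIFICATE-----\nplaceholder PEM body\n-----END CERTIFICATE-----",
        },
    },
    ValidationOptions = new AzureNative.VideoAnalyzer.Inputs.TlsValidationOptionsArgs
    {
        IgnoreHostname = "true",    // skip subject-name validation (per the table above)
        IgnoreSignature = "false",  // keep chain-of-trust validation
    },
};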
UnsecuredEndpoint, UnsecuredEndpointArgs
- Credentials Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentials - Credentials to be presented to the endpoint.
- Url string - The endpoint URL for Video Analyzer to connect to.
- Tunnel Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnel - Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
UnsecuredEndpointResponse, UnsecuredEndpointResponseArgs
- Credentials Pulumi.AzureNative.VideoAnalyzer.Inputs.UsernamePasswordCredentialsResponse - Credentials to be presented to the endpoint.
- Url string - The endpoint URL for Video Analyzer to connect to.
- Tunnel Pulumi.AzureNative.VideoAnalyzer.Inputs.SecureIotDeviceRemoteTunnelResponse - Describes the tunnel through which Video Analyzer can connect to the endpoint URL. This is an optional property, typically used when the endpoint is behind a firewall.
UsernamePasswordCredentials, UsernamePasswordCredentialsArgs
UsernamePasswordCredentialsResponse, UsernamePasswordCredentialsResponseArgs
VideoCreationProperties, VideoCreationPropertiesArgs
- Description string - Optional description provided by the user. Value can be up to 2048 characters long.
- RetentionPeriod string - Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day and 10 years, in 1-day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- SegmentLength string - Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate a larger volume of storage transactions. Larger segments reduce the number of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (i.e. "PT30S" equals 30 seconds) and can vary between 30 seconds and 5 minutes, in 30-second increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- Title string - Optional title provided by the user. Value can be up to 256 characters long.
VideoCreationPropertiesResponse, VideoCreationPropertiesResponseArgs
- Description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- Retention
Period string - Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (i.e. "P1D" equals 1 day) and can vary between 1 day to 10 years, in 1 day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- SegmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate a larger volume of storage transactions. Larger segments reduce the number of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (e.g. "PT30S" equals 30 seconds) and can vary between 30 seconds and 5 minutes, in 30-second increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- Title string
- Optional title provided by the user. Value can be up to 256 characters long.
- Description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- RetentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (e.g. "P1D" equals 1 day) and can vary between 1 day and 10 years, in 1-day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- SegmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate a larger volume of storage transactions. Larger segments reduce the number of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (e.g. "PT30S" equals 30 seconds) and can vary between 30 seconds and 5 minutes, in 30-second increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- Title string
- Optional title provided by the user. Value can be up to 256 characters long.
- description String
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod String
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (e.g. "P1D" equals 1 day) and can vary between 1 day and 10 years, in 1-day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength String
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate a larger volume of storage transactions. Larger segments reduce the number of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (e.g. "PT30S" equals 30 seconds) and can vary between 30 seconds and 5 minutes, in 30-second increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title String
- Optional title provided by the user. Value can be up to 256 characters long.
- description string
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod string
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (e.g. "P1D" equals 1 day) and can vary between 1 day and 10 years, in 1-day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength string
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate a larger volume of storage transactions. Larger segments reduce the number of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (e.g. "PT30S" equals 30 seconds) and can vary between 30 seconds and 5 minutes, in 30-second increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title string
- Optional title provided by the user. Value can be up to 256 characters long.
- description str
- Optional description provided by the user. Value can be up to 2048 characters long.
- retention_period str
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (e.g. "P1D" equals 1 day) and can vary between 1 day and 10 years, in 1-day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segment_length str
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate a larger volume of storage transactions. Larger segments reduce the number of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (e.g. "PT30S" equals 30 seconds) and can vary between 30 seconds and 5 minutes, in 30-second increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title str
- Optional title provided by the user. Value can be up to 256 characters long.
- description String
- Optional description provided by the user. Value can be up to 2048 characters long.
- retentionPeriod String
- Video retention period indicates how long the video is kept in storage. Value must be specified in ISO8601 duration format (e.g. "P1D" equals 1 day) and can vary between 1 day and 10 years, in 1-day increments. When absent (null), all video content is retained indefinitely. This property is only allowed for topologies where "kind" is set to "live".
- segmentLength String
- Segment length indicates the length of individual content files (segments) which are persisted to storage. Smaller segments provide lower archive playback latency but generate a larger volume of storage transactions. Larger segments reduce the number of storage transactions while increasing the archive playback latency. Value must be specified in ISO8601 duration format (e.g. "PT30S" equals 30 seconds) and can vary between 30 seconds and 5 minutes, in 30-second increments. Changing this value after the initial call to create the video resource can lead to errors when uploading content to the archive. Default value is 30 seconds. This property is only allowed for topologies where "kind" is set to "live".
- title String
- Optional title provided by the user. Value can be up to 256 characters long.
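The retention and segment settings above are plain ISO8601 duration strings. As a minimal sketch (assuming the same using alias as the example at the top of this page, using AzureNative = Pulumi.AzureNative, with all values purely illustrative), a video creation block that keeps 30 days of video in 30-second segments could look like this:
var creationProperties = new AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesArgs
{
    Title = "Loading Dock (Camera 7)",        // optional, up to 256 characters
    Description = "North loading dock feed",  // optional, up to 2048 characters
    RetentionPeriod = "P30D",                 // keep 30 days of video; omit to retain indefinitely
    SegmentLength = "PT30S",                  // 30-second segments; valid range is PT30S to PT5M
};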
VideoEncoderH264, VideoEncoderH264Args
- BitrateKbps string
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoScale
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- BitrateKbps string
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale VideoScale
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale VideoScale
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps string
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale VideoScale
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrate_kbps str
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frame_rate str
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale VideoScale
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale Property Map
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
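To show how the encoder properties fit together, here is a hedged sketch of a VideoEncoderH264Args value that caps the bitrate, sets the frame rate, and pads the output to 720p. The numbers are illustrative, and the encoder value would still need to be attached to an encoder processor node in the topology, which is not shown here.
var h264Encoder = new AzureNative.VideoAnalyzer.Inputs.VideoEncoderH264Args
{
    BitrateKbps = "2000",  // omit to let the encoder match the input quality automatically
    FrameRate = "30",      // frames per second; must be greater than 0 and at most 300
    Scale = new AzureNative.VideoAnalyzer.Inputs.VideoScaleArgs
    {
        Mode = AzureNative.VideoAnalyzer.VideoScaleMode.Pad,  // 'Pad' requires both dimensions
        Width = "1280",
        Height = "720",
    },
};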
VideoEncoderH264Response, VideoEncoderH264ResponseArgs
- BitrateKbps string
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- BitrateKbps string
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- FrameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- Scale VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps string
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frameRate string
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrate_kbps str
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frame_rate str
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale VideoScaleResponse
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
- bitrateKbps String
- The maximum bitrate, in kilobits per second (Kbps), at which video should be encoded. If omitted, the encoder sets it automatically to try to match the quality of the input video.
- frameRate String
- The frame rate (in frames per second) of the encoded video. The value must be greater than zero and less than or equal to 300. If omitted, the encoder uses the average frame rate of the input video.
- scale Property Map
- Describes the resolution of the encoded video. If omitted, the encoder uses the resolution of the input video.
VideoPublishingOptions, VideoPublishingOptionsArgs
- DisableArchive string
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- DisableArchive string
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive string
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing string
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disable_archive str
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disable_rtsp_publishing str
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
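The two flags are mutually constraining: at most one of them may be 'true'. The example at the top of this page archives content without RTSP publishing; the opposite arrangement, sketched below for a low-latency-streaming-only topology (both values are strings, as above), would be:
var publishingOptions = new AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsArgs
{
    DisableArchive = "true",          // nothing is recorded
    DisableRtspPublishing = "false",  // RTSP playback URL remains available for low-latency viewing
};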
VideoPublishingOptionsResponse, VideoPublishingOptionsResponseArgs
- DisableArchive string
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- DisableArchive string
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- DisableRtspPublishing string
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive string
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing string
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disable_archive str
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disable_rtsp_publishing str
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
- disableArchive String
- When set to 'true', content will not be archived or recorded. This is used, for example, when the topology is used only for low-latency video streaming. Default is 'false'. If set to 'true', then "disableRtspPublishing" must be set to 'false'.
- disableRtspPublishing String
- When set to 'true', the RTSP playback URL will not be published, disabling low-latency streaming. This is used, for example, when the topology is used only for archiving content. Default is 'false'. If set to 'true', then "disableArchive" must be set to 'false'.
VideoScale, VideoScaleArgs
- Height string
- The desired output video height.
- Mode string | Pulumi.AzureNative.VideoAnalyzer.VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- Width string
- The desired output video width.
- Height string
- The desired output video height.
- Mode string | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- Width string
- The desired output video width.
- height String
- The desired output video height.
- mode String | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width String
- The desired output video width.
- height string
- The desired output video height.
- mode string | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width string
- The desired output video width.
- height str
- The desired output video height.
- mode str | VideoScaleMode
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width str
- The desired output video width.
- height String
- The desired output video height.
- mode String | "Pad" | "PreserveAspectRatio" | "Stretch"
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width String
- The desired output video width.
VideoScaleMode, VideoScaleModeArgs
- Pad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When two dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- Stretch
- Stretches the original video so it is resized to the specified dimensions.
- VideoScaleModePad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- VideoScaleModePreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When two dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- VideoScaleModeStretch
- Stretches the original video so it is resized to the specified dimensions.
- Pad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When two dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- Stretch
- Stretches the original video so it is resized to the specified dimensions.
- Pad
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PreserveAspectRatio
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When two dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- Stretch
- Stretches the original video so it is resized to the specified dimensions.
- PAD
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- PRESERVE_ASPECT_RATIO
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When two dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- STRETCH
- Stretches the original video so it is resized to the specified dimensions.
- "Pad"
- Pads the video with black horizontal stripes (letterbox) or black vertical stripes (pillar-box) so the video is resized to the specified dimensions while not altering the content aspect ratio.
- "PreserveAspectRatio"
- Preserves the same aspect ratio as the input video. If only one video dimension is provided, the second dimension is calculated based on the input video aspect ratio. When two dimensions are provided, the video is resized to fit the most constraining dimension, considering the input video size and aspect ratio.
- "Stretch"
- Stretches the original video so it is resized to the specified dimensions.
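The mode determines which dimensions are required. As a small sketch (the 720-pixel height is an arbitrary example value), 'PreserveAspectRatio' needs only one dimension and derives the other from the input video, whereas 'Pad' or 'Stretch' would need both Width and Height:
var preserveAspectScale = new AzureNative.VideoAnalyzer.Inputs.VideoScaleArgs
{
    Mode = AzureNative.VideoAnalyzer.VideoScaleMode.PreserveAspectRatio,
    Height = "720",  // width is calculated from the input video's aspect ratio
};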
VideoScaleResponse, VideoScaleResponseArgs
- Height string
- The desired output video height.
- Mode string
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- Width string
- The desired output video width.
- Height string
- The desired output video height.
- Mode string
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- Width string
- The desired output video width.
- height String
- The desired output video height.
- mode String
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width String
- The desired output video width.
- height string
- The desired output video height.
- mode string
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width string
- The desired output video width.
- height str
- The desired output video height.
- mode str
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width str
- The desired output video width.
- height String
- The desired output video height.
- mode String
- Describes the video scaling mode to be applied. Default mode is 'Pad'. If the mode is 'Pad' or 'Stretch', both width and height must be specified. If the mode is 'PreserveAspectRatio', only one of width or height needs to be provided.
- width String
- The desired output video width.
VideoSequenceAbsoluteTimeMarkers, VideoSequenceAbsoluteTimeMarkersArgs
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges str
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
VideoSequenceAbsoluteTimeMarkersResponse, VideoSequenceAbsoluteTimeMarkersResponseArgs
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- Ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges string
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges str
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
- ranges String
- The sequence of datetime ranges. Example: '[["2021-10-05T03:30:00Z", "2021-10-05T03:40:00Z"]]'.
VideoSink, VideoSinkArgs
- Inputs List<Pulumi.AzureNative.VideoAnalyzer.Inputs.NodeInput>
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- Inputs []NodeInput
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs List<NodeInput>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs NodeInput[]
- An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- videoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs Sequence[NodeInput]
- An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- video_name str
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- video_creation_properties VideoCreationProperties
- Optional video properties to be used in case a new video resource needs to be created on the service.
- video_publishing_options VideoPublishingOptions
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties Property Map
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions Property Map
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
VideoSinkResponse, VideoSinkResponseArgs
- Inputs List<Pulumi.AzureNative.VideoAnalyzer.Inputs.NodeInputResponse>
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- Inputs []NodeInputResponse
- An array of upstream node references within the topology to be used as inputs for this node.
- Name string
- Node name. Must be unique within the topology.
- VideoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- VideoCreationProperties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- VideoPublishingOptions VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs List<NodeInputResponse>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs NodeInputResponse[]
- An array of upstream node references within the topology to be used as inputs for this node.
- name string
- Node name. Must be unique within the topology.
- videoName string
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs Sequence[NodeInputResponse]
- An array of upstream node references within the topology to be used as inputs for this node.
- name str
- Node name. Must be unique within the topology.
- video_name str
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- video_creation_properties VideoCreationPropertiesResponse
- Optional video properties to be used in case a new video resource needs to be created on the service.
- video_publishing_options VideoPublishingOptionsResponse
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
- inputs List<Property Map>
- An array of upstream node references within the topology to be used as inputs for this node.
- name String
- Node name. Must be unique within the topology.
- videoName String
- Name of a new or existing video resource used to capture and publish content. Note: if downstream of an RTSP source, and if disableArchive is set to true, then no content is archived.
- videoCreationProperties Property Map
- Optional video properties to be used in case a new video resource needs to be created on the service.
- videoPublishingOptions Property Map
- Options to change how the video sink publishes content via the video resource. This property is only allowed for topologies where "kind" is set to "live".
VideoSource, VideoSourceArgs
- Name string
- Node name. Must be unique within the topology.
- TimeSequences Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- Name string
- Node name. Must be unique within the topology.
- TimeSequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
- name string
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName string
- Name of the Video Analyzer video resource to be used as the source.
- name str
- Node name. Must be unique within the topology.
- time_sequences VideoSequenceAbsoluteTimeMarkers
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- video_name str
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences Property Map
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
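As a sketch of how a video source node and its time markers combine (for example, when re-processing a recorded video), the snippet below limits the source to a ten-minute recorded range. The node name, video name, and datetime range are illustrative, and the two '#Microsoft.VideoAnalyzer...' type discriminators are assumed here by analogy with the VideoSink node in the example at the top of this page.
var recordedSource = new AzureNative.VideoAnalyzer.Inputs.VideoSourceArgs
{
    Name = "videoSource",
    Type = "#Microsoft.VideoAnalyzer.VideoSource",  // discriminator assumed, mirroring the VideoSink pattern
    VideoName = "camera001",
    TimeSequences = new AzureNative.VideoAnalyzer.Inputs.VideoSequenceAbsoluteTimeMarkersArgs
    {
        Type = "#Microsoft.VideoAnalyzer.VideoSequenceAbsoluteTimeMarkers",  // discriminator assumed
        // Only media recorded within these ranges is picked up by the source.
        Ranges = "[[\"2021-10-05T03:30:00Z\", \"2021-10-05T03:40:00Z\"]]",
    },
};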
VideoSourceResponse, VideoSourceResponseArgs
- Name string
- Node name. Must be unique within the topology.
- TimeSequences Pulumi.AzureNative.VideoAnalyzer.Inputs.VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- Name string
- Node name. Must be unique within the topology.
- TimeSequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- VideoName string
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
- name string
- Node name. Must be unique within the topology.
- timeSequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName string
- Name of the Video Analyzer video resource to be used as the source.
- name str
- Node name. Must be unique within the topology.
- time_sequences VideoSequenceAbsoluteTimeMarkersResponse
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- video_name str
- Name of the Video Analyzer video resource to be used as the source.
- name String
- Node name. Must be unique within the topology.
- timeSequences Property Map
- Describes a sequence of datetime ranges. The video source only picks up recorded media within these ranges.
- videoName String
- Name of the Video Analyzer video resource to be used as the source.
Import
An existing resource can be imported using its type token, name, and identifier, e.g.
$ pulumi import azure-native:videoanalyzer:PipelineTopology pipelineTopology1 /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Media/videoAnalyzers/{accountName}/pipelineTopologies/{pipelineTopologyName}
To learn more about importing existing cloud resources, see Importing resources.
Package Details
- Repository
- Azure Native pulumi/pulumi-azure-native
- License
- Apache-2.0