Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT

Deep learning models are everywhere without us even realizing it. The number of AI use cases has been increasing exponentially with the rapid development of new algorithms, cheaper compute, and greater access to data. Almost every industry has deep learning applications, from healthcare to education to manufacturing, construction, and beyond. Many developers opt to use popular AI frameworks like PyTorch, which simplifies the process of analyzing predictions, training models, leveraging data, and refining future results.

PyTorch on Azure: Get an enterprise-ready PyTorch experience in the cloud. Learn more.

PyTorch is a machine learning framework used for applications such as computer vision and natural language processing. Originally developed by Meta AI, it is now part of the Linux Foundation umbrella under the name of the PyTorch Foundation. PyTorch has a powerful, TorchScript-based implementation that transforms the model from eager to graph mode for deployment scenarios.

One of the biggest challenges PyTorch developers face in their deep learning projects is model optimization and performance. Oftentimes, the question arises: how can I improve the performance of my PyTorch models? As you might have read in our previous blog, Intel and Microsoft have joined hands to tackle this problem with OpenVINO Integration with Torch-ORT. Initially, Microsoft released Torch-ORT, which focused on accelerating PyTorch model training using ONNX Runtime. Recently, this capability was extended to accelerate PyTorch model inferencing by using the OpenVINO toolkit on Intel central processing units (CPU), graphics processing units (GPU), and video processing units (VPU) with just two lines of code.

Figure 1: OpenVINO Integration with Torch-ORT application flow. This figure shows how OpenVINO Integration with Torch-ORT can be used in a computer vision application.

By adding just two lines of code, we achieved 2.15 times faster inference for the PyTorch Inception V3 model on an 11th Gen Intel Core i7 processor1. In addition to Inception V3, we also see performance gains for many popular PyTorch models such as ResNet50, RoBERTa-Base, and more. Currently, OpenVINO Integration with Torch-ORT supports over 120 PyTorch models from popular model zoos, like Torchvision and Hugging Face.

Figure 2: FP32 model performance of OpenVINO Integration with Torch-ORT as compared to PyTorch. This chart shows average inference latency (in milliseconds) for 100 runs after 15 warm-up iterations on an 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz.

Features

OpenVINO Integration with Torch-ORT introduces the following features:
- Inline conversion of static/dynamic input shape models
- Graph partitioning
- Support for INT8 models
- Dockerfiles/Docker containers

Inline conversion of static/dynamic input shape models

OpenVINO Integration with Torch-ORT performs inferencing of PyTorch models by converting these models to ONNX inline and subsequently performing inference with the OpenVINO Execution Provider. Currently, both static and dynamic input shape models are supported with OpenVINO Integration with Torch-ORT. You also have the ability to save the inline exported ONNX model using the DebugOptions API.
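The "two lines" referred to above are an import and a module wrap. The following is an illustration only; the sample model, input shape, and exact API surface are assumptions based on the torch-ort-infer package and may vary by release:

import torch
import torchvision

# Line 1 of 2: import the inference wrapper (from the torch-ort-infer package)
from torch_ort import ORTInferenceModule

model = torchvision.models.inception_v3(pretrained=True).eval()

# Line 2 of 2: wrap the eager-mode model so inference runs through
# ONNX Runtime with the OpenVINO Execution Provider
model = ORTInferenceModule(model)

with torch.no_grad():
    output = model(torch.randn(1, 3, 299, 299))  # Inception V3 expects 299x299 inputs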
Graph partitioning

OpenVINO Integration with Torch-ORT supports many PyTorch models by leveraging the existing graph partitioning feature from ONNX Runtime. With this feature, the input model graph is divided into subgraphs depending on the operators supported by OpenVINO. The OpenVINO-compatible subgraphs run using the OpenVINO Execution Provider, and unsupported operators fall back to the MLAS CPU Execution Provider.

Support for INT8 models

OpenVINO Integration with Torch-ORT extends support for lower precision inference through the post-training quantization (PTQ) technique. Using PTQ, developers can quantize their PyTorch models with the Neural Network Compression Framework (NNCF) and then run inferencing with OpenVINO Integration with Torch-ORT. Note: currently, our INT8 model support is in the early stages, only including ResNet50 and MobileNetV2. We are continuously expanding our INT8 model coverage.

Docker containers

You can now use OpenVINO Integration with Torch-ORT on macOS and Windows through Docker. Pre-built Docker images are readily available on Docker Hub for your convenience. With a simple docker pull, you will be able to accelerate the performance of PyTorch models. To build the Docker image yourself, you can also find Dockerfiles readily available on GitHub.

Customer story: Roboflow

Roboflow empowers ISVs to build their own computer vision applications and enables hundreds of thousands of developers with a rich catalog of services, models, and frameworks to further optimize their AI workloads on a variety of different Intel hardware. An easy-to-use developer toolkit to accelerate models, properly integrated with AI frameworks such as OpenVINO Integration with Torch-ORT, provides the best of both worlds: an increase in inference speed as well as the ability to reuse already created AI application code with minimal changes. The Roboflow team has showcased a case study that demonstrates performance gains with OpenVINO Integration with Torch-ORT as compared to native PyTorch for the YOLOv7 model on Intel CPU. The Roboflow team is continuing to actively test OpenVINO Integration with Torch-ORT with the goal of enabling PyTorch developers in the Roboflow community.

Try it out

Try out OpenVINO Integration with Torch-ORT through a collection of Jupyter Notebooks. Through these sample tutorials, you will see how to install OpenVINO Integration with Torch-ORT and accelerate performance for PyTorch models with just two additional lines of code. Stay in the PyTorch framework and leverage OpenVINO optimizations; it doesn't get much easier than this.

Learn more

Here is a list of resources to help you learn more:
- GitHub repository
- Sample notebooks
- Supported models
- Usage guide
- PyTorch on Azure

Notes

1. Framework configuration: ONNXRuntime 1.13.1
Application configuration: torch_ort_infer 1.13.1, Python timeit module for timing inference of models
Input: classification models: torch.Tensor; NLP models: masked sentence; OD model: .jpg image
Application metric: average inference latency for 100 iterations calculated after 15 warmup iterations
Platform: Tiger Lake
Number of nodes: 1 NUMA node
Number of sockets: 1
CPU or accelerator: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
Cores/socket, threads/socket or EU/socket: 4, 2 threads/core
ucode: 0xa4
HT: Enabled
Turbo: Enabled
BIOS version: TNTGLV57.9026.2020.0916.1340
System DDR memory config (slots / cap / run-speed): 2 / 32 GB / 2667 MT/s
Total memory/node (DDR+DCPMM): 64 GB
Storage (boot): Sabrent Rocket 4.0 500GB – size 465.8G
OS: Ubuntu 20.04.4 LTS
Kernel: 5.15.0-1010-intel-iotg

Notices and disclaimers

Performance varies by use, configuration, and other factors.
Learn more at www.Intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software, or service activation. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from a course of performance, course of dealing, or usage in trade. Results have been estimated or simulated. © Intel Corporation. Intel, the Intel logo, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

The post Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT appeared first on Microsoft Open Source Blog.


Build custom chatbot applications using OpenChatkit models on Amazon SageMaker

Open-source large language models (LLMs) have become popular, allowing researchers, developers, and organizations to access these models to foster innovation and experimentation. This encourages collaboration from the open-source community to contribute to developments and improvements of LLMs. Open-source LLMs provide transparency to the model architecture, training process, and training data, which allows researchers to understand how the model works, identify potential biases, and address ethical concerns. These open-source LLMs are democratizing generative AI by making advanced natural language processing (NLP) technology available to a wide range of users to build mission-critical business applications. GPT-NeoX, LLaMA, Alpaca, GPT4All, Vicuna, Dolly, and OpenAssistant are some of the popular open-source LLMs.

OpenChatKit is an open-source LLM used to build general-purpose and specialized chatbot applications, released by Together Computer in March 2023 under the Apache-2.0 license. This model allows developers to have more control over the chatbot's behavior and tailor it to their specific applications. OpenChatKit provides a set of tools, a base bot, and building blocks to build fully customized, powerful chatbots. The key components are as follows:
- An instruction-tuned LLM, fine-tuned for chat from EleutherAI's GPT-NeoX-20B with over 43 million instructions on 100% carbon-negative compute. The GPT-NeoXT-Chat-Base-20B model is based on EleutherAI's GPT-NeoX model, and is fine-tuned with data focusing on dialog-style interactions.
- Customization recipes to fine-tune the model to achieve high accuracy on your tasks.
- An extensible retrieval system enabling you to augment bot responses with information from a document repository, API, or other live-updating information source at inference time.
- A moderation model, fine-tuned from GPT-JT-6B, designed to filter which questions the bot responds to.

The increasing scale and size of deep learning models present obstacles to successfully deploying these models in generative AI applications. To meet the demands for low latency and high throughput, it becomes essential to employ sophisticated methods like model parallelism and quantization. Lacking proficiency in the application of these methods, numerous users encounter difficulties in initiating the hosting of sizable models for generative AI use cases.

In this post, we show how to deploy OpenChatKit models (GPT-NeoXT-Chat-Base-20B and GPT-JT-Moderation-6B) on Amazon SageMaker using DJL Serving and open-source model parallel libraries like DeepSpeed and Hugging Face Accelerate. We use DJL Serving, which is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming language agnostic. We demonstrate how the Hugging Face Accelerate library simplifies deployment of large models onto multiple GPUs, thereby reducing the burden of running LLMs in a distributed fashion. Let's get started!

Extensible retrieval system

An extensible retrieval system is one of the key components of OpenChatKit. It enables you to customize the bot response based on a closed-domain knowledge base. Although LLMs are able to retain factual knowledge in their model parameters and can achieve remarkable performance on downstream NLP tasks when fine-tuned, their capacity to access and predict closed-domain knowledge accurately remains restricted. Therefore, when they're presented with knowledge-intensive tasks, their performance suffers compared to that of task-specific architectures.
You can use the OpenChatKit retrieval system to augment knowledge in bot responses from external knowledge sources such as Wikipedia, document repositories, APIs, and other information sources. The retrieval system enables the chatbot to access current information by obtaining pertinent details in response to a specific query, thereby supplying the necessary context for the model to generate answers. To illustrate the functionality of this retrieval system, we provide support for an index of Wikipedia articles and offer example code demonstrating how to invoke a web search API for information retrieval. By following the provided documentation, you can integrate the retrieval system with any dataset or API during the inference process, allowing the chatbot to incorporate dynamically updated data into its responses.

Moderation model

Moderation models are important in chatbot applications to enforce content filtering, quality control, user safety, and legal and compliance requirements. Moderation is a difficult and subjective task, and depends a lot on the domain of the chatbot application. OpenChatKit provides tools to moderate the chatbot application and monitor input text prompts for any inappropriate content. The moderation model provides a good baseline that can be adapted and customized to various needs. OpenChatKit has a 6-billion-parameter moderation model, GPT-JT-Moderation-6B, which can moderate the chatbot to limit the inputs to the moderated subjects. Although the model itself does have some moderation built in, TogetherComputer trained a GPT-JT-Moderation-6B model with Ontocord.ai's OIG-moderation dataset. This model runs alongside the main chatbot to check that both the user input and the answer from the bot don't contain inappropriate results. You can also use this to detect any out-of-domain questions to the chatbot and override when the question is not part of the chatbot's domain.

The following diagram illustrates the OpenChatKit workflow.

Extensible retrieval system use cases

Although we can apply this technique in various industries to build generative AI applications, for this post we discuss use cases in the financial industry. Retrieval augmented generation can be employed in financial research to automatically generate research reports on specific companies, industries, or financial products. By retrieving relevant information from internal knowledge bases, financial archives, news articles, and research papers, you can generate comprehensive reports that summarize key insights, financial metrics, market trends, and investment recommendations. You can use this solution to monitor and analyze financial news, market sentiment, and trends.

Solution overview

The following steps are involved in building a chatbot using OpenChatKit models and deploying them on SageMaker:
1. Download the chat base GPT-NeoXT-Chat-Base-20B model and package the model artifacts to be uploaded to Amazon Simple Storage Service (Amazon S3).
2. Use a SageMaker large model inference (LMI) container, configure the properties, and set up custom inference code to deploy this model.
3. Configure model parallel techniques and use inference optimization libraries in DJL Serving properties. We will use Hugging Face Accelerate as the engine for DJL Serving. Additionally, we define tensor parallel configurations to partition the model.
4. Create a SageMaker model and endpoint configuration, and deploy the SageMaker endpoint.

You can follow along by running the notebook in the GitHub repo.
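The snippets that follow reference a SageMaker client, a runtime client, and an execution role without showing their creation. A minimal setup sketch, assuming the notebook runs with an attached SageMaker execution role:

import json
import boto3
import sagemaker

sm_client = boto3.client("sagemaker")           # control plane: models, endpoint configs, endpoints
smr_client = boto3.client("sagemaker-runtime")  # data plane: invoke_endpoint
role = sagemaker.get_execution_role()           # IAM role assumed by the SageMaker model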
Download the OpenChatKit model

First, we download the OpenChatKit base model. We use huggingface_hub and snapshot_download to download the model, which downloads an entire repository at a given revision. Downloads are made concurrently to speed up the process. See the following code:

from huggingface_hub import snapshot_download
from pathlib import Path
import os

# - This will download the model into the current directory wherever the jupyter notebook is running
local_model_path = Path("./openchatkit")
local_model_path.mkdir(exist_ok=True)
model_name = "togethercomputer/GPT-NeoXT-Chat-Base-20B"
# Only download pytorch checkpoint files
allow_patterns = ["*.json", "*.pt", "*.bin", "*.txt", "*.model"]

# - Leverage the snapshot library to download the model since the model is stored in a repository using LFS
chat_model_download_path = snapshot_download(
    repo_id=model_name,            # A user or an organization name and a repo name
    cache_dir=local_model_path,    # Path to the folder where cached files are stored
    allow_patterns=allow_patterns, # Only files matching at least one pattern are downloaded
)

DJL Serving properties

You can use SageMaker LMI containers to host large generative AI models without providing your own inference code. This is extremely useful when there is no custom preprocessing of the input data or postprocessing of the model's predictions. You can also deploy a model using custom inference code. In this post, we demonstrate how to deploy OpenChatKit models with custom inference code.

SageMaker expects the model artifacts in tar format. We create each OpenChatKit model with the following files: serving.properties and model.py.

The serving.properties configuration file indicates to DJL Serving which model parallelization and inference optimization libraries you would like to use. The following is a list of settings we use in this configuration file (openchatkit/serving.properties):

engine = Python
option.tensor_parallel_degree = 4
option.s3url = {{s3url}}

This contains the following parameters:
- engine – The engine for DJL to use.
- option.entryPoint – The entry point Python file or module. This should align with the engine that is being used.
- option.s3url – Set this to the URI of the S3 bucket that contains the model.
- option.modelid – If you want to download the model from huggingface.co, you can set option.modelid to the model ID of a pretrained model hosted inside a model repository on huggingface.co (https://huggingface.co/models). The container uses this model ID to download the corresponding model repository on huggingface.co.
- option.tensor_parallel_degree – Set this to the number of GPU devices over which DeepSpeed needs to partition the model. This parameter also controls the number of workers per model that will be started up when DJL Serving runs. For example, if we have an 8-GPU machine and we are creating eight partitions, then we will have one worker per model to serve the requests. It's necessary to tune the parallelism degree and identify the optimal value for a given model architecture and hardware platform. We call this ability inference-adapted parallelism.

Refer to Configurations and settings for an exhaustive list of options.
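Before deployment, the serving files are packaged as a tar archive and uploaded to Amazon S3; the create_model call later references that location as s3_code_artifact. A minimal sketch, assuming the two files above sit in the openchatkit/ folder (the actual notebook may package additional files):

import tarfile
import sagemaker

# Package the DJL Serving code artifacts in the tar format SageMaker expects
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("openchatkit/serving.properties", arcname="serving.properties")
    tar.add("openchatkit/model.py", arcname="model.py")

sess = sagemaker.Session()
s3_code_artifact = sess.upload_data(
    "model.tar.gz",
    bucket=sess.default_bucket(),
    key_prefix="openchatkit/code",
)
print(f"Code artifact uploaded to {s3_code_artifact}")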
OpenChatKit models

The OpenChatKit base model implementation has the following four files:

model.py – This file implements the handling logic for the main OpenChatKit GPT-NeoX model. It receives the inference input request, loads the model, loads the Wikipedia index, and serves the response. Refer to model.py (created as part of the notebook) for additional details. model.py uses the following key classes:
- OpenChatKitService – This handles passing the data between the GPT-NeoX model, Faiss search, and the conversation object. WikipediaIndex and Conversation objects are initialized, and input chat conversations are sent to the index to search for relevant content from Wikipedia. This also generates a unique ID for each invocation if one is not supplied, for the purpose of storing the prompts in Amazon DynamoDB.
- ChatModel – This class loads the model and tokenizer and generates the response. It handles partitioning the model across multiple GPUs using tensor_parallel_degree, and configures the dtypes and device_map. The prompts are passed to the model to generate responses. A stopping criterion, StopWordsCriteria, is configured so that generation only produces the bot response on inference.
- ModerationModel – We use two moderation models in the ModerationModel class: the input model, to indicate to the chat model that the input is inappropriate and override the inference result, and the output model, to override the inference result. We classify the input prompt and output response with the following possible labels: casual; needs caution; needs intervention (this is flagged to be moderated by the model); possibly needs caution; probably needs caution.

wikipedia_prepare.py – This file handles downloading and preparing the Wikipedia index. In this post, we use a Wikipedia index provided on Hugging Face datasets. To search the Wikipedia documents for relevant text, the index needs to be downloaded from Hugging Face because it's not packaged elsewhere. The wikipedia_prepare.py file is responsible for handling the download when imported. Only one of the multiple processes running for inference can clone the repository; the rest wait until the files are present in the local file system.

wikipedia.py – This file is used for searching the Wikipedia index for contextually relevant documents. The input query is tokenized and embeddings are created using mean_pooling. We compute cosine similarity distance metrics between the query embedding and the Wikipedia index to retrieve contextually relevant Wikipedia sentences. Refer to wikipedia.py for implementation details:

# function to create sentence embedding using mean_pooling
def mean_pooling(token_embeddings, mask):
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
    sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]
    return sentence_embeddings

# function to compute cosine similarity distance between 2 embeddings
def cos_sim_2d(x, y):
    norm_x = x / np.linalg.norm(x, axis=1, keepdims=True)
    norm_y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return np.matmul(norm_x, norm_y.T)

conversation.py – This file is used for storing and retrieving the conversation thread in DynamoDB for passing to the model and user. conversation.py is adapted from the open-source OpenChatKit repository. This file is responsible for defining the object that stores the conversation turns between the human and the model. With this, the model is able to retain a session for the conversation, allowing a user to refer to previous messages. Because SageMaker endpoint invocations are stateless, this conversation needs to be stored in a location external to the endpoint instances. On startup, the instance creates a DynamoDB table if it doesn't exist.
All updates to the conversation are then stored in DynamoDB based on the session_id key, which is generated by the endpoint. Any invocation with a session ID will retrieve the associated conversation string and update it as required.

Build an LMI inference container with custom dependencies

The index search uses Facebook's Faiss library for performing the similarity search. Because this isn't included in the base LMI image, the container needs to be adapted to install this library. The following code defines a Dockerfile that installs Faiss from source alongside other libraries needed by the bot endpoint. We use the sm-docker utility to build and push the image to Amazon Elastic Container Registry (Amazon ECR) from Amazon SageMaker Studio. Refer to Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks for more details.

The DJL container doesn't have Conda installed, so Faiss needs to be cloned and compiled from source. To install Faiss, the dependencies for using the BLAS APIs and Python support need to be installed. After these packages are installed, Faiss is configured to use AVX2 and CUDA before being compiled with the Python extensions installed. pandas, fastparquet, boto3, and git-lfs are installed afterwards because these are required for downloading and reading the index files.

FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/djl-inference:0.21.0-deepspeed0.8.0-cu117
ARG FAISS_URL=https://github.com/facebookresearch/faiss.git
RUN apt-get update && apt-get install -y git-lfs wget cmake pkg-config build-essential apt-utils
RUN apt search openblas && apt-get install -y libopenblas-dev swig
RUN git clone $FAISS_URL && \
    cd faiss && \
    cmake -B build . -DFAISS_OPT_LEVEL=avx2 -DCMAKE_CUDA_ARCHITECTURES="86" && \
    make -C build -j faiss && \
    make -C build -j swigfaiss && \
    make -C build -j swigfaiss_avx2 && \
    (cd build/faiss/python && python -m pip install .)
RUN pip install pandas fastparquet boto3 && \
    git lfs install --skip-repo && \
    apt-get clean all
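With the Dockerfile in place, the image can be built and pushed to Amazon ECR from a Studio notebook using the sm-docker utility mentioned above. A sketch only; the repository name is a placeholder and CLI flags may vary by version:

pip install sagemaker-studio-image-build
sm-docker build . --repository openchatkit-faiss:latest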
Create the model

Now that we have the Docker image in Amazon ECR, we can proceed with creating the SageMaker model object for the OpenChatKit models. We deploy GPT-NeoXT-Chat-Base-20B as the chat model, with input and output moderation handled by GPT-JT-Moderation-6B. Refer to create_model for more details:

from sagemaker.utils import name_from_base

chat_model_name = name_from_base(f"gpt-neoxt-chatbase-ds")
print(chat_model_name)

create_model_response = sm_client.create_model(
    ModelName=chat_model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": chat_inference_image_uri,
        "ModelDataUrl": s3_code_artifact,
    },
)
chat_model_arn = create_model_response["ModelArn"]
print(f"Created Model: {chat_model_arn}")

Configure the endpoint

Next, we define the endpoint configuration for the OpenChatKit models. We deploy the models using the ml.g5.12xlarge instance type. Refer to create_endpoint_config for more details:

chat_endpoint_config_name = f"{chat_model_name}-config"
chat_endpoint_name = f"{chat_model_name}-endpoint"

chat_endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=chat_endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": chat_model_name,
            "InstanceType": "ml.g5.12xlarge",
            "InitialInstanceCount": 1,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
        },
    ],
)

Deploy the endpoint

Finally, we create an endpoint using the model and endpoint configuration we defined in the previous steps:

chat_create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{chat_endpoint_name}",
    EndpointConfigName=chat_endpoint_config_name
)
print(f"Created Endpoint: {chat_create_endpoint_response['EndpointArn']}")

Run inference from OpenChatKit models

Now it's time to send inference requests to the model and get the responses. We pass the input text prompt and model parameters such as temperature, top_k, and max_new_tokens. The quality of the chatbot responses is based on the parameters specified, so it's recommended to benchmark model performance against these parameters to find the optimal setting for your use case. The input prompt is first sent to the input moderation model, and the output is sent to ChatModel to generate the responses. During this step, the model uses the Wikipedia index to retrieve contextually relevant sections to pass to the model as the prompt to get domain-specific responses from the model. Finally, the model response is sent to the output moderation model to check for classification, and then the responses are returned. See the following code:

def chat(prompt, session_id=None, **kwargs):
    if session_id:
        chat_response_model = smr_client.invoke_endpoint(
            EndpointName=chat_endpoint_name,
            Body=json.dumps(
                {
                    "inputs": prompt,
                    "parameters": {
                        "temperature": 0.6,
                        "top_k": 40,
                        "max_new_tokens": 512,
                        "session_id": session_id,
                        "no_retrieval": True,
                    },
                }
            ),
            ContentType="application/json",
        )
    else:
        chat_response_model = smr_client.invoke_endpoint(
            EndpointName=chat_endpoint_name,
            Body=json.dumps(
                {
                    "inputs": prompt,
                    "parameters": {
                        "temperature": 0.6,
                        "top_k": 40,
                        "max_new_tokens": 512,
                    },
                }
            ),
            ContentType="application/json",
        )
    response = chat_response_model["Body"].read().decode("utf8")
    return response

prompts = "What does a data engineer do?"
chat(prompts)
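Because chat() accepts a caller-supplied session_id (the handler generates one only when it is not supplied), a follow-up question can reuse the same ID so the conversation stored in DynamoDB provides context. A sketch:

import uuid

session_id = str(uuid.uuid4())

# First turn: the conversation is stored under this session ID
print(chat("What does a data engineer do?", session_id=session_id))

# Second turn: the same session ID retrieves the stored conversation for context
print(chat("Which tools do they typically use?", session_id=session_id))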
Refer to the sample chat interactions below.

Clean up

Follow the instructions in the cleanup section of the notebook to delete the resources provisioned as part of this post to avoid unnecessary charges. Refer to Amazon SageMaker Pricing for details about the cost of the inference instances.

Conclusion

In this post, we discussed the importance of open-source LLMs and how to deploy an OpenChatKit model on SageMaker to build next-generation chatbot applications. We discussed the various components of OpenChatKit models, moderation models, and how to use an external knowledge source like Wikipedia for retrieval augmented generation (RAG) workflows. You can find step-by-step instructions in the GitHub notebook. Let us know about the amazing chatbot applications you're building. Cheers!

About the Authors

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.

Vikram Elango is a Sr. AI/ML Specialist Solutions Architect at AWS, based in Virginia, US. He is currently focused on generative AI, LLMs, prompt engineering, large model inference optimization, and scaling ML across enterprises. Vikram helps financial and insurance industry customers with design and thought leadership to build and deploy machine learning applications at scale. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.

Andrew Smith is a Cloud Support Engineer in the SageMaker, Vision & Other team at AWS, based in Sydney, Australia. He supports customers using many AI/ML services on AWS with expertise in working with Amazon SageMaker. Outside of work, he enjoys spending time with friends and family as well as learning about different technologies.


Announcing .NET Chiseled Containers

.NET chiseled Ubuntu container images are now GA and can be used in production, for .NET 6, 7, and 8. Canonical also announced the general availability of chiseled Ubuntu containers. Chiseled images are the result of a long-term partnership and design collaboration between Canonical and Microsoft. We announced chiseled containers just over a year ago, as a new direction. They are now ready for you to use in your production environment and to take advantage of the value they offer.

The images are available in our container repos with the following tag: 8.0-jammy-chiseled. .NET 6 and 7 variants differ only by version number. These images rely on Ubuntu 22.04 (Jammy Jellyfish), as referenced by jammy in the tag name.

We made a few videos on this topic over the last year, which provide a great overview:
- Chiselled Ubuntu Containers
- .NET Containers advancements in .NET 8 | .NET Conf 2023
- .NET in Ubuntu and Chiseled Containers

We also published a container workshop for .NET Conf that uses chiseled containers for many of its examples. The workshop also uses OCI publish, which pairs well with chiseled containers.

Chiseled images

General-purpose container images are not the future of cloud apps. The premise of chiseled containers is that container images are the best deployment vehicle for cloud apps, but that typical images contain far too many components. Instead, we need to slice away all but the essential components. Chiseled container images do that. That helps — a lot — with size and security.

The number one complaint category we hear about container images is around CVE management. It's hard to do well. We've built automation that rebuilds .NET images within hours of Alpine, Debian, and Ubuntu base image updates on Docker Hub. That means the images we ship are always fresh. However, most users don't have that automation, and end up with stale images in their registries that fail CVE scans, often asking why our images are stale (when they are not). We know because they send us their image scan reports. There has to be a better way.

It's really easy to demo the difference, using anchore/syft (using Docker). These commands show us the number of "Linux components" in three images we publish, for Debian, Ubuntu, and Alpine, respectively:

$ docker run --rm anchore/syft mcr.microsoft.com/dotnet/runtime:8.0 | grep deb | wc -l
92
$ docker run --rm anchore/syft mcr.microsoft.com/dotnet/runtime:8.0-jammy | grep deb | wc -l
105
$ docker run --rm anchore/syft mcr.microsoft.com/dotnet/runtime:8.0-alpine | grep apk | wc -l
17

One can guess that it's pretty easy for a CVE to apply to one of these images, given the number of components. In fact, .NET doesn't use most of those components! Alpine shines here.

Here's the result for the same image, but chiseled:

$ docker run --rm anchore/syft mcr.microsoft.com/dotnet/runtime:8.0-jammy-chiseled | grep deb | wc -l
7

That gets us down to 7 components. In fact, the list is so short, we can just look at all of them:

$ docker run --rm anchore/syft mcr.microsoft.com/dotnet/runtime:8.0-jammy-chiseled | grep deb
base-files       12ubuntu4.4                 deb
ca-certificates  20230311ubuntu0.22.04.1     deb
libc6            2.35-0ubuntu3.4             deb
libgcc-s1        12.3.0-1ubuntu1~22.04       deb
libssl3          3.0.2-0ubuntu1.12           deb
libstdc++6       12.3.0-1ubuntu1~22.04       deb
zlib1g           1:1.2.11.dfsg-2ubuntu9.2    deb

That's a very limited set of quite common dependencies. For example, .NET uses OpenSSL for everything crypto, including TLS connections. Some customers need FIPS compliance and are able to enable that since .NET relies on OpenSSL.
Native AOT apps are similar, but need one less component. We care so much about limiting size and component count that we created an image just for it, removing libstdc++6. This image is new and still in preview.

$ docker run --rm anchore/syft mcr.microsoft.com/dotnet/nightly/runtime-deps:8.0-jammy-chiseled-aot | grep deb | wc -l
6

As expected, that image only contains 6 components.

Chiseled images are also much smaller. We can see that, this time with (uncompressed) aspnet images:

$ docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}" | grep mcr.microsoft.com/dotnet/aspnet
mcr.microsoft.com/dotnet/aspnet   8.0-jammy-chiseled   110MB
mcr.microsoft.com/dotnet/aspnet   8.0-alpine           112MB
mcr.microsoft.com/dotnet/aspnet   8.0-jammy            216MB
mcr.microsoft.com/dotnet/aspnet   8.0                  217MB

In summary, chiseled images:
- Slice a little over 100MB (uncompressed) relative to existing Ubuntu images.
- Match the size of Alpine, the existing fan favorite for size.
- Are the smallest images we publish with glibc compatibility.
- Contain the fewest components, reducing CVE exposure.
- Are a great choice for matching dev and prod, given the popularity of Ubuntu for dev machines.
- Have the strongest support offering of any image variant we publish.

Distroless form factor

We've been publishing container images for nearly a decade. Throughout that time, we've heard regular requests to make images smaller, to remove components, and to improve security. Even as we've improved .NET container images, we've continued to hear those requests. The fundamental problem is that we cannot change the base images we pull from Docker Hub (beyond adding to them). We needed something revolutionary to change this dynamic.

Many users have pointed us to Google Distroless over the years.

"Distroless" images contain only your application and its runtime dependencies. They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.

That's a great description, taken from the distroless repo. Google deconstructs various Linux distros, and builds them back as atoms, but only using the most necessary atoms to run apps. That's brilliant.

We have a strong philosophy of only taking artifacts from and aligning with the policies of upstream distros, specifically Alpine, Debian, and Ubuntu. That way, we have the same relationship with the upstream provider (like Ubuntu) as the user. If there is an issue (outside of .NET), we're looking for resolution from the same singular party. That's why we never adopted Google Distroless and have been waiting for something like Ubuntu chiseled, from the upstream provider. Ubuntu chiseled is a great expression of the distroless form factor. It delivers on the same goals as the original Google project, but comes from the distro itself.

You might worry that this is all new and untested and that something is going to break. In fact, we've been delivering distroless images within Microsoft for a few years. At Microsoft, we use Mariner Linux. A few years ago, we asked the Mariner team to create a distroless solution for our shared users. Microsoft teams have been hosting apps in production using .NET + Mariner distroless images since then. That has worked out quite well.

Security posture

The two most critical components missing from these images are a shell and a package manager. Their absence really limits what an attacker can do. curl and wget are also missing from these images.
One of the first things an attacker typically does is download a shell script (from a server they control) and then run it. Yes, that really happens. That's effectively impossible with these images. The required components to enable that are just not there and not acquirable.

We also ship these images as non-root:

$ docker inspect mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled | grep User
    "User": "",
    "User": "1654",

"All new files and directories are created with a UID and GID of 0" — Dockerfile reference

This change further constrains the type of operations that are allowed in these images. On one hand, a non-root user isn't able to run apt install commands, but since apt isn't even present, that doesn't matter. More practically, the non-root user isn't able to update app files. The app files will be copied into the container as root, making it impossible for the non-root user to alter them or to add files to the same directory. The non-root user only has read and execute permissions for the app.

You might wonder why it took us so long to support these images after announcing them over a year ago. That's a related and ironic story. Many image scanners rely on scanning the package manager database, but since the package manager was removed, the required database was removed too. Oops! Instead, our friends at Canonical had to synthesize a package manager database file through other means, as opposed to bringing part of the package manager back just to enable scanning. In the end, we have an excellent solution that optimizes for security and for scanning, without needing to compromise either. All of the anchore/syft commands shown earlier in the post are evidence that the scanning solution works.

App size

There are multiple ways to control the size of container images. The following slide from our .NET Conf presentation demonstrates that with a sample app. There are two primary axes:
- Base image, for framework-dependent apps.
- Publish option, for self-contained apps.

On the left, the chiseled variants of aspnet result in significant size wins. The smaller chiseled image — "composite" — derives its additional reductions by building parts of the .NET runtime libraries in a more optimized way. That will be covered in more detail in a follow-up post. On the right, self-contained + trimming drops the image size a lot more, since all the unused .NET libraries are removed. Native AOT drops the image size to below 10MB. That's a welcome and shocking surprise. Note that native AOT only works for console apps and services, not for web sites. That may change in a later release.

You might be wondering how to select between these choices. Framework-dependent deployment has the benefit of maximum layer sharing. That means sharing copies of .NET in your registry and within a single machine (if you host multiple .NET apps together). Build times are also shorter. Self-contained apps win on size and registry pull, but have more limited sharing (only runtime-deps is shared).

Adoption

Chiseled images are the biggest change to our container image portfolio since we added support for Alpine, several years ago. We recommend that users take a deeper look at this change.

Note: The chiseled images we publish don't include ICU or tzdata, just like Alpine (except for "extra" images). Please comment on dotnet-docker #5014 if you need these libraries.

Users adopting .NET 8 are the most obvious candidates for chiseled containers. You will already be making changes, so why not make one more change?
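To make that concrete, here is a minimal multi-stage Dockerfile sketch (the project and DLL names are placeholders); the only chiseled-specific part is the tag in the final FROM line:

# Build stage: full SDK image, unchanged
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /source
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: swap the general-purpose tag for the chiseled one
# (was: FROM mcr.microsoft.com/dotnet/aspnet:8.0)
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]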
In many cases, you’ll just be using a different image tag, with significant new benefits. Ubuntu and Debian users can achieve very significant size savings over the general-purpose images you’ve had available until now. We recommend that you give chiseled images serious considerations. Alpine users have always been very well served and that isn’t changing. We’ve also included a non-root user in .NET 8 Alpine images. We’d recommend that Alpine users first switch to non-root hosting and then consider the additional benefits of chiseled images. We expect that many Alpine users will remain happy with Alpine and others will prefer Ubuntu Chiseled now that it also has a small variant. It’s great to have options! We’ve often been asked if we’d ever switch our convenient version tags — like 8.0 — to Alpine or (now) Chiseled. Such a change would break many apps, so we commit to publishing Debian images for those tags, effectively forever. It’s straightforward for users to opt-in to one of our other image types. We believe equally in compatibility and choice, and we’re offering both. Summary We’ve been strong advocates of Ubuntu chiseled containers from the moment we saw the first demo. That demo has now graduated all the way to a GA release, with Canonical and Microsoft announcing availability together. This level of collaboration will continue as we support these new images and work together on what’s next. We’re looking forward to feedback, since there are likely interesting scenarios we haven’t yet considered. We’ve had the benefit of working closing with Canonical on this project and making these images available to .NET users first. However, we’re so enthusiastic about this project, we want all developers to have the opportunity to use chiseled images. We encourage other developer ecosystems to stongly consider offering chiseled images, like Java, Python, and Node.js. We’ve had recent requests for information on chiseled images, after the .NET Conf presentations. Perhaps a year from now, chiseled images will have become a common choice for many developers. Over time, we’ve seen the increasing customer challenge of operationally managing containers, largely related to CVE burden. We believe that chiseled images are a great solution for helping teams reduce cost and deploy apps with greater confidence. The post Announcing .NET Chiseled Containers appeared first on .NET Blog.


Trying out MongoDB with EF Core using Testcontainers

Helping developers use both relational and non-relational databases effectively was one of the original tenets of EF Core. To this end, there has been an EF Core database provider for Azure Cosmos DB document databases for many years now. Recently, the EF Core team has been collaborating with engineers from MongoDB to bring support for MongoDB to EF Core. The initial result of this collaboration is the first preview release of the MongoDB provider for EF Core.

In this post, we will try out the MongoDB provider for EF Core by using it to:
- Map a C# object model to documents in a MongoDB database
- Use EF to save some documents to the database
- Write LINQ queries to retrieve documents from the database
- Make changes to a document and use EF's change tracking to update the document

The code shown in this post can be found on GitHub.

Testcontainers

It's very easy to get a MongoDB database in the cloud that you can use to try things out. However, Testcontainers is another way to test code with different database systems which is particularly suited to:
- Running automated tests against the database
- Creating standalone reproductions when reporting issues
- Trying out new things with minimal setup

Testcontainers are distributed as NuGet packages that take care of running a container containing a configured, ready-to-use database system. The containers use Docker or a Docker alternative to run, so this may need to be installed on your machine if you don't already have it. See Welcome to Testcontainers for .NET! for more details. Other than starting Docker, you don't need to do anything else except import the NuGet package.

The C# project

We'll use a simple console application to try out MongoDB with EF Core. This project needs two package references:
- MongoDB.EntityFrameworkCore to install the EF Core provider. This package also transitively installs the common EF Core packages and the MongoDB.Driver package, which is used by the EF provider to access the MongoDB database.
- Testcontainers.MongoDb to install the pre-defined Testcontainer for MongoDB.

The full csproj file looks like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net7.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <RootNamespace />
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Testcontainers.MongoDB" Version="3.5.0" />
    <PackageReference Include="MongoDB.EntityFrameworkCore" Version="7.0.0-preview.1" />
  </ItemGroup>
</Project>

Remember, the full project is available to download from GitHub.

The object model

We'll map a simple object model of customers and their addresses:

public class Customer
{
    public Guid Id { get; set; }
    public required string Name { get; set; }
    public required Species Species { get; set; }
    public required ContactInfo ContactInfo { get; set; }
}

public class ContactInfo
{
    public required Address ShippingAddress { get; set; }
    public Address? BillingAddress { get; set; }
    public required PhoneNumbers Phones { get; set; }
}

public class PhoneNumbers
{
    public PhoneNumber? HomePhone { get; set; }
    public PhoneNumber? WorkPhone { get; set; }
    public PhoneNumber? MobilePhone { get; set; }
}

public class PhoneNumber
{
    public required int CountryCode { get; set; }
    public required string Number { get; set; }
}

public class Address
{
    public required string Line1 { get; set; }
    public string? Line2 { get; set; }
    public string? Line3 { get; set; }
    public required string City { get; set; }
    public required string Country { get; set; }
    public required string PostalCode { get; set; }
}

public enum Species
{
    Human,
    Dog,
    Cat
}
Since MongoDB works with documents, we're going to map this model to a top-level Customer document, with the addresses and phone numbers embedded in this document. We'll see how to do this in the next section.

Creating the EF model

EF works by building a model of the mapped CLR types, such as those for Customer, etc. in the previous section. This model defines the relationships between types in the model, as well as how each type maps to the database. Luckily there is not much to do here, since EF uses a set of model building conventions that generate a model based on input from both the model types and the database provider. This means that for relational databases, each type gets mapped to a different table by convention. For document databases like Azure Cosmos DB and now MongoDB, only the top-level type (Customer in our example) is mapped to its own document. Other types referenced from the top-level types are, by convention, included in the main document.

This means that the only thing EF needs to know to build a model is the top-level type, and that the MongoDB provider should be used. We do this by defining a type that extends from DbContext. For example:

public class CustomersContext : DbContext
{
    private readonly MongoClient _client;

    public CustomersContext(MongoClient client)
    {
        _client = client;
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseMongoDB(_client, "efsample");

    public DbSet<Customer> Customers => Set<Customer>();
}

In this DbContext class:
- UseMongoDB is called, passing in the client driver and the database name. This tells EF Core to use the MongoDB provider when building the model and accessing the database.
- A DbSet<Customer> property defines the top-level type for which documents should be modeled.

We'll see later how to create the MongoClient instance and use the DbContext.
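For reference, the model that EF builds can be dumped programmatically as well as inspected in the debugger. A small sketch, assuming the standard EF Core metadata APIs (namespace and option names may differ slightly across versions):

using Microsoft.EntityFrameworkCore.Infrastructure;

// mongoClient is created later in the post; print the long-form debug view of the model
await using var context = new CustomersContext(mongoClient);
Console.WriteLine(context.Model.ToDebugString(MetadataDebugStringOptions.LongDefault));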
When we do, examining the model DebugView shows this Model EntityType ContactInfo Owned Properties CustomerId (no field, Guid) Shadow Required PK FK AfterSaveThrow Navigations BillingAddress (Address) ToDependent ContactInfo.BillingAddress#Address (Address) Phones (PhoneNumbers) ToDependent PhoneNumbers ShippingAddress (Address) ToDependent ContactInfo.ShippingAddress#Address (Address) Keys CustomerId PK Foreign keys ContactInfo {'CustomerId'} -> Customer {'Id'} Unique Ownership ToDependent ContactInfo Cascade EntityType ContactInfo.BillingAddress#Address (Address) CLR Type Address Owned Properties ContactInfoCustomerId (no field, Guid) Shadow Required PK FK AfterSaveThrow City (string) Required Country (string) Required Line1 (string) Required Line2 (string) Line3 (string) PostalCode (string) Required Keys ContactInfoCustomerId PK Foreign keys ContactInfo.BillingAddress#Address (Address) {'ContactInfoCustomerId'} -> ContactInfo {'CustomerId'} Unique Ownership ToDependent BillingAddress Cascade EntityType ContactInfo.ShippingAddress#Address (Address) CLR Type Address Owned Properties ContactInfoCustomerId (no field, Guid) Shadow Required PK FK AfterSaveThrow City (string) Required Country (string) Required Line1 (string) Required Line2 (string) Line3 (string) PostalCode (string) Required Keys ContactInfoCustomerId PK Foreign keys ContactInfo.ShippingAddress#Address (Address) {'ContactInfoCustomerId'} -> ContactInfo {'CustomerId'} Unique Ownership ToDependent ShippingAddress Cascade EntityType Customer Properties Id (Guid) Required PK AfterSaveThrow ValueGenerated.OnAdd Name (string) Required Species (Species) Required Navigations ContactInfo (ContactInfo) ToDependent ContactInfo Keys Id PK EntityType PhoneNumbers Owned Properties ContactInfoCustomerId (no field, Guid) Shadow Required PK FK AfterSaveThrow Navigations HomePhone (PhoneNumber) ToDependent PhoneNumbers.HomePhone#PhoneNumber (PhoneNumber) MobilePhone (PhoneNumber) ToDependent PhoneNumbers.MobilePhone#PhoneNumber (PhoneNumber) WorkPhone (PhoneNumber) ToDependent PhoneNumbers.WorkPhone#PhoneNumber (PhoneNumber) Keys ContactInfoCustomerId PK Foreign keys PhoneNumbers {'ContactInfoCustomerId'} -> ContactInfo {'CustomerId'} Unique Ownership ToDependent Phones Cascade EntityType PhoneNumbers.HomePhone#PhoneNumber (PhoneNumber) CLR Type PhoneNumber Owned Properties PhoneNumbersContactInfoCustomerId (no field, Guid) Shadow Required PK FK AfterSaveThrow CountryCode (int) Required Number (string) Required Keys PhoneNumbersContactInfoCustomerId PK Foreign keys PhoneNumbers.HomePhone#PhoneNumber (PhoneNumber) {'PhoneNumbersContactInfoCustomerId'} -> PhoneNumbers {'ContactInfoCustomerId'} Unique Ownership ToDependent HomePhone Cascade EntityType PhoneNumbers.MobilePhone#PhoneNumber (PhoneNumber) CLR Type PhoneNumber Owned Properties PhoneNumbersContactInfoCustomerId (no field, Guid) Shadow Required PK FK AfterSaveThrow CountryCode (int) Required Number (string) Required Keys PhoneNumbersContactInfoCustomerId PK Foreign keys PhoneNumbers.MobilePhone#PhoneNumber (PhoneNumber) {'PhoneNumbersContactInfoCustomerId'} -> PhoneNumbers {'ContactInfoCustomerId'} Unique Ownership ToDependent MobilePhone Cascade EntityType PhoneNumbers.WorkPhone#PhoneNumber (PhoneNumber) CLR Type PhoneNumber Owned Properties PhoneNumbersContactInfoCustomerId (no field, Guid) Shadow Required PK FK AfterSaveThrow CountryCode (int) Required Number (string) Required Keys PhoneNumbersContactInfoCustomerId PK Foreign keys PhoneNumbers.WorkPhone#PhoneNumber (PhoneNumber) 
{'PhoneNumbersContactInfoCustomerId'} -> PhoneNumbers {'ContactInfoCustomerId'} Unique Ownership ToDependent WorkPhone Cascade

Looking at this model, it can be seen that EF created owned entity types for the ContactInfo, Address, PhoneNumber and PhoneNumbers types, even though only the Customer type was referenced directly from the DbContext. These other types were discovered and configured by the model-building conventions.

Create the MongoDB test container

We now have a model and a DbContext. Next we need an actual MongoDB database, and this is where Testcontainers come in. There are Testcontainers available for many different types of database, and they all work in a very similar way. That is, a container is created using the appropriate builder (MongoDbBuilder in this case), and then that container is started. For example:

await using var mongoContainer = new MongoDbBuilder()
    .WithImage("mongo:6.0")
    .Build();

await mongoContainer.StartAsync();

And that's it! We now have a configured, clean MongoDB instance running locally with which we can do what we wish, before just throwing it away.

Save data to MongoDB

Let's use EF Core to write some data to the MongoDB database. To do this, we'll need to create a DbContext instance, and for this we need a MongoClient instance from the underlying MongoDB driver. Often, in a real app, the MongoClient instance and the DbContext instance will be obtained using dependency injection. For the sake of simplicity, we'll just new them up here:

var mongoClient = new MongoClient(mongoContainer.GetConnectionString());

await using (var context = new CustomersContext(mongoClient))
{
    // ...
}

Notice that the Testcontainer instance provides the connection string we need to connect to our MongoDB test database. To save a new Customer document, we'll use Add to start tracking the document, and then call SaveChangesAsync to insert it into the database.

await using (var context = new CustomersContext(mongoClient))
{
    var customer = new Customer
    {
        Name = "Willow",
        Species = Species.Dog,
        ContactInfo = new()
        {
            ShippingAddress = new()
            {
                Line1 = "Barking Gate",
                Line2 = "Chalk Road",
                City = "Walpole St Peter",
                Country = "UK",
                PostalCode = "PE14 7QQ"
            },
            BillingAddress = new()
            {
                Line1 = "15a Main St",
                City = "Ailsworth",
                Country = "UK",
                PostalCode = "PE5 7AF"
            },
            Phones = new()
            {
                HomePhone = new() { CountryCode = 44, Number = "7877 555 555" },
                MobilePhone = new() { CountryCode = 1, Number = "(555) 2345-678" },
                WorkPhone = new() { CountryCode = 1, Number = "(555) 2345-678" }
            }
        }
    };

    context.Add(customer);
    await context.SaveChangesAsync();
}

If we look at the JSON (actually, BSON, which is a more efficient binary representation for JSON documents) document created in the database, we can see it contains nested documents for all the contact information. This is different from what EF Core would do for a relational database, where each type would have been mapped to its own top-level table.
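If you want to peek at the raw document yourself, one way is to read it back with the MongoDB driver directly. This is just a sketch: the database name comes from the UseMongoDB call above, while the collection name ("Customers") is an assumption based on the provider's default naming from the DbSet property.

// Rough sketch: dump the stored document using the MongoDB.Driver API directly.
// Assumes the default collection name "Customers" derived from the DbSet property.
// These using directives would normally sit at the top of the file.
using MongoDB.Bson;
using MongoDB.Bson.IO;
using MongoDB.Driver;

var collection = mongoClient
    .GetDatabase("efsample")
    .GetCollection<BsonDocument>("Customers");

var document = await collection.Find(Builders<BsonDocument>.Filter.Empty).FirstAsync();
Console.WriteLine(document.ToJson(new JsonWriterSettings { Indent = true }));

Running something like this should print a document similar to the one shown next.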
{
  "_id": "CSUUID(\"9a97fd67-515f-4586-a024-cf82336fc64f\")",
  "Name": "Willow",
  "Species": 1,
  "ContactInfo": {
    "BillingAddress": {
      "City": "Ailsworth",
      "Country": "UK",
      "Line1": "15a Main St",
      "Line2": null,
      "Line3": null,
      "PostalCode": "PE5 7AF"
    },
    "Phones": {
      "HomePhone": { "CountryCode": 44, "Number": "7877 555 555" },
      "MobilePhone": { "CountryCode": 1, "Number": "(555) 2345-678" },
      "WorkPhone": { "CountryCode": 1, "Number": "(555) 2345-678" }
    },
    "ShippingAddress": {
      "City": "Walpole St Peter",
      "Country": "UK",
      "Line1": "Barking Gate",
      "Line2": "Chalk Road",
      "Line3": null,
      "PostalCode": "PE14 7QQ"
    }
  }
}

Using LINQ queries

EF Core supports LINQ for querying data. For example, to query a single customer:

using (var context = new CustomersContext(mongoClient))
{
    var customer = await context.Customers.SingleAsync(c => c.Name == "Willow");

    var address = customer.ContactInfo.ShippingAddress;
    var mobile = customer.ContactInfo.Phones.MobilePhone;
    Console.WriteLine($"{customer.Id} {customer.Name}");
    Console.WriteLine($" Shipping to {address.City}, {address.Country} (+{mobile.CountryCode} {mobile.Number})");
}

Running this code results in the following output:

336d4936-d048-469e-84c8-d5ebc17754ff Willow
 Shipping to Walpole St Peter, UK (+1 (555) 2345-678)

Notice that the query pulled back the entire document, not just the Customer object, so we are able to access and print out the customer's contact info without going back to the database. Other LINQ operators can be used to perform filtering, etc. For example, to bring back all customers where the Species is Dog:

var customers = await context.Customers
    .Where(e => e.Species == Species.Dog)
    .ToListAsync();

Updating a document

By default, EF tracks the object graphs returned from queries. Then, when SaveChanges or SaveChangesAsync is called, EF detects any changes that have been made to the document and sends an update to MongoDB to update that document. For example:

using (var context = new CustomersContext(mongoClient))
{
    var baxter = (await context.Customers.FindAsync(baxterId))!;

    baxter.ContactInfo.ShippingAddress = new()
    {
        Line1 = "Via Giovanni Miani",
        City = "Rome",
        Country = "IT",
        PostalCode = "00154"
    };

    await context.SaveChangesAsync();
}

In this case, we're using FindAsync to query a customer by primary key; a LINQ query would work just as well. After that, we change the shipping address to Rome, and call SaveChangesAsync. EF detects that only the shipping address for a single document has been changed, and so sends a partial update to patch the updated address into the document stored in the MongoDB database.

Going forward

So far, the MongoDB provider for EF Core is only in its first preview. Full CRUD (creating, reading, updating, and deleting documents) is supported by this preview, but there are some limitations. See the readme on GitHub for more information, and for places to ask questions and file bugs.

Learn more

To learn more about EF Core and MongoDB: See the EF Core documentation to learn more about using EF Core to access all kinds of databases. See the MongoDB documentation to learn more about using MongoDB from any platform. Watch Introducing the MongoDB provider for EF Core on the .NET Data Community Standup. Watch the upcoming Announcing MongoDB Provider for Entity Framework Core on the MongoDB livestream.

Summary

We used Testcontainers to try out the first preview release of the MongoDB provider for EF Core.
Testcontainers allowed us to test MongoDB with very minimal setup, and we were able to create, query, and update documents in the MongoDB database using EF Core. The post Trying out MongoDB with EF Core using Testcontainers appeared first on .NET Blog.


Implement a multi-object tracking solution on a custom dataset with Amazon SageMaker
Implement a multi-object tracking solution on a cu ...

The demand for multi-object tracking (MOT) in video analysis has increased significantly in many industries, such as live sports, manufacturing, and traffic monitoring. For example, in live sports, MOT can track soccer players in real time to analyze physical performance such as real-time speed and moving distance. Among the latest model developments in MOT applications, ByteTrack, introduced in 2021, remains one of the best-performing methods on various benchmark datasets. In ByteTrack, the author proposed a simple, effective, and generic data association method (referred to as BYTE) for detection box and tracklet matching. Rather than keeping only the high-score detection boxes, it also keeps the low-score detection boxes, which can help recover unmatched tracklets when occlusion, motion blur, or size changes occur (a simplified sketch of this association logic follows the solution overview below). The BYTE association strategy can also be used in other Re-ID based trackers, such as FairMOT. The experiments showed improvements compared to the vanilla tracker algorithms. For example, when applying BYTE in data association, FairMOT achieved an improvement of 1.3% on MOTA (FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking), which is one of the main metrics in the MOT task. In the post Train and deploy a FairMOT model with Amazon SageMaker, we demonstrated how to train and deploy a FairMOT model with Amazon SageMaker on the MOT challenge datasets. When applying a MOT solution in real-world cases, you need to train or fine-tune a MOT model on a custom dataset. With Amazon SageMaker Ground Truth, you can effectively create labels on your own video dataset. Following on the previous post, we have added the following contributions and modifications:

Generate labels for a custom video dataset using Ground Truth
Preprocess the Ground Truth-generated labels to be compatible with ByteTrack and other MOT solutions
Train the ByteTrack algorithm with a SageMaker training job (with the option to extend a pre-built container)
Deploy the trained model with various deployment options, including asynchronous inference

We also provide the code sample on GitHub, which uses SageMaker for labeling, building, training, and inference. SageMaker is a fully managed service that provides every developer and data scientist with the ability to prepare, build, train, and deploy machine learning (ML) models quickly. SageMaker provides several built-in algorithms and container images that you can use to accelerate training and deployment of ML models. Additionally, custom algorithms such as ByteTrack can also be supported via custom-built Docker container images. For more information about deciding on the right level of engagement with containers, refer to Using Docker containers with SageMaker. SageMaker provides plenty of options for model deployment, such as real-time inference, serverless inference, and asynchronous inference. In this post, we show how to deploy a tracking model with different deployment options, so that you can choose the suitable deployment method in your own use case.

Overview of solution

Our solution consists of the following high-level steps:

Label the dataset for tracking, with a bounding box on each object (for example, pedestrian, car, and so on).
Set up the resources for ML code development and execution.
Train a ByteTrack model and tune hyperparameters on a custom dataset.
Deploy the trained ByteTrack model with different deployment options depending on your use case: real-time processing, asynchronous, or batch prediction.

The following diagram illustrates the architecture in each step.
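Before getting into the walkthrough, here is a heavily simplified, self-contained sketch of the two-stage BYTE association idea mentioned above. It is illustrative only: the score thresholds, the greedy IoU matcher, and the Track bookkeeping are assumptions for this sketch, not the actual ByteTrack implementation, which additionally uses Kalman filter motion prediction and more careful track lifecycle management.

# Illustrative sketch of BYTE-style two-stage association (not the real ByteTrack code).
from dataclasses import dataclass, field
from itertools import count

HIGH_THRESH = 0.6   # assumed split between "high" and "low" score detections
LOW_THRESH = 0.1    # assumed floor below which detections are ignored
IOU_GATE = 0.3      # assumed minimum IoU to accept a match

_ids = count(1)

@dataclass
class Track:
    box: tuple                      # (x1, y1, x2, y2)
    track_id: int = field(default_factory=lambda: next(_ids))
    missed: int = 0                 # frames since the track was last matched

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def greedy_match(tracks, dets):
    """Greedy IoU matching; returns (matches, unmatched_tracks, unmatched_dets)."""
    pairs = sorted(
        ((iou(t.box, d["box"]), t, d) for t in tracks for d in dets),
        key=lambda p: p[0],
        reverse=True,
    )
    used_t, used_d, matches = set(), set(), []
    for score, t, d in pairs:
        if score < IOU_GATE or id(t) in used_t or id(d) in used_d:
            continue
        used_t.add(id(t))
        used_d.add(id(d))
        matches.append((t, d))
    return matches, [t for t in tracks if id(t) not in used_t], [d for d in dets if id(d) not in used_d]

def byte_associate(tracks, detections):
    high = [d for d in detections if d["score"] >= HIGH_THRESH]
    low = [d for d in detections if LOW_THRESH <= d["score"] < HIGH_THRESH]

    # Stage 1: match existing tracks against high-score detections.
    matches, unmatched_tracks, unmatched_high = greedy_match(tracks, high)
    # Stage 2: leftover tracks get a second chance against low-score detections,
    # which is what recovers objects that are occluded or blurred in this frame.
    matches_low, lost_tracks, _ = greedy_match(unmatched_tracks, low)

    for t, d in matches + matches_low:
        t.box, t.missed = d["box"], 0
    for t in lost_tracks:
        t.missed += 1

    # Unmatched high-score detections start new tracks; long-lost tracks are dropped.
    new_tracks = [Track(box=d["box"]) for d in unmatched_high]
    surviving = [t for t in lost_tracks if t.missed <= 30]
    return [t for t, _ in matches + matches_low] + surviving + new_tracks

In a full tracker, byte_associate would be called once per frame with the current track list and that frame's detector output.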
Prerequisites

Before getting started, complete the following prerequisites:

Create an AWS account or use an existing AWS account. We recommend running the source code in the us-east-1 Region.
Make sure that you have a minimum of one GPU instance (for example, ml.p3.2xlarge for single GPU training, or ml.p3.16xlarge) for the distributed training job. Other types of GPU instances are also supported, with various performance differences.
Make sure that you have a minimum of one GPU instance (for example, ml.p3.2xlarge) for the inference endpoint.
Make sure that you have a minimum of one GPU instance (for example, ml.p3.2xlarge) for running batch prediction with processing jobs.

If this is your first time running SageMaker services on the aforementioned instance types, you may have to request a quota increase for the required instances.

Set up your resources

After you complete all the prerequisites, you're ready to deploy the solution.

Create a SageMaker notebook instance. For this task, we recommend using the ml.t3.medium instance type. While running the code, we use docker build to extend the SageMaker training image with the ByteTrack code (the docker build command will be run locally within the notebook instance environment). Therefore, we recommend increasing the volume size to 100 GB (the default volume size is 5 GB) from the advanced configuration options.
For your AWS Identity and Access Management (IAM) role, choose an existing role or create a new role, and attach the AmazonS3FullAccess, AmazonSNSFullAccess, AmazonSageMakerFullAccess, and AmazonElasticContainerRegistryPublicFullAccess policies to the role.
Clone the GitHub repo to the /home/ec2-user/SageMaker folder on the notebook instance you created.
Create a new Amazon Simple Storage Service (Amazon S3) bucket or use an existing bucket.

Label the dataset

In the data-preparation.ipynb notebook, we download an MOT16 test video file and split the video file into small video files with 200 frames. Then we upload those video files to the S3 bucket as the data source for labeling. To label the dataset for the MOT task, refer to Getting started. When the labeling job is complete, we can access the following annotation directory at the job output location in the S3 bucket. The manifests directory should contain an output folder if we finished labeling all the files. We can see the file output.manifest in the output folder. This manifest file contains information about the video and video tracking labels that you can use later to train and test a model.

Train a ByteTrack model and tune hyperparameters on the custom dataset

To train your ByteTrack model, we use the bytetrack-training.ipynb notebook. The notebook consists of the following steps:

Initialize the SageMaker setting.
Perform data preprocessing.
Build and push the container image.
Define a training job.
Launch the training job.
Tune hyperparameters.

Especially in data preprocessing, we need to convert the labeled dataset with the Ground Truth output format to the MOT17 format dataset, and convert the MOT17 format dataset to an MSCOCO format dataset (as shown in the following figure) so that we can train a YOLOX model on the custom dataset. A rough sketch of the first conversion step follows.
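To make the idea concrete, this minimal sketch assumes the Ground Truth video tracking annotations have already been parsed into simple Python dicts (one per labeled box, with frame number, track ID, and pixel coordinates). The field names used here are illustrative rather than the exact Ground Truth schema, and the real notebook handles many more details.

# Illustrative sketch: write per-frame tracking labels into the MOT gt.txt layout
# (frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility).
# The `annotations` input structure is an assumption, not the exact Ground Truth schema.

def to_mot_rows(annotations):
    rows = []
    for ann in annotations:
        rows.append(
            f"{ann['frame']},{ann['track_id']},"
            f"{ann['left']},{ann['top']},{ann['width']},{ann['height']},"
            f"1,1,1.0"  # confidence, class, and visibility placeholders
        )
    return rows

def write_mot_gt(annotations, path="gt.txt"):
    ordered = sorted(annotations, key=lambda a: (a["frame"], a["track_id"]))
    with open(path, "w") as f:
        f.write("\n".join(to_mot_rows(ordered)) + "\n")

# Example usage with made-up values:
write_mot_gt([
    {"frame": 1, "track_id": 1, "left": 100, "top": 50, "width": 40, "height": 120},
    {"frame": 2, "track_id": 1, "left": 104, "top": 52, "width": 40, "height": 118},
])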
Because we keep both the MOT format dataset and MSCOCO format dataset, you can train other MOT algorithms without separating detection and tracking on the MOT format dataset. You can easily change the detector to other algorithms such as YOLOv7 to use your existing object detection algorithm.

Deploy the trained ByteTrack model

After we train the YOLOX model, we deploy the trained model for inference. SageMaker provides several options for model deployment, such as real-time inference, asynchronous inference, serverless inference, and batch inference. In our post, we use the sample code for real-time inference, asynchronous inference, and batch inference. You can choose the suitable code from these options based on your own business requirements. Because SageMaker batch transform requires the data to be partitioned and stored on Amazon S3 as input and the invocations are sent to the inference endpoints concurrently, it doesn't meet the requirements in object tracking tasks where the targets need to be sent in a sequential manner. Therefore, we don't use SageMaker batch transform jobs to run the batch inference. In this example, we use SageMaker processing jobs to do batch inference. The following table summarizes the configuration for our inference jobs.

Inference Type | Payload | Processing Time | Auto Scaling
Real-time | Up to 6 MB | Up to 1 minute | Minimum instance count is 1 or higher
Asynchronous | Up to 1 GB | Up to 15 minutes | Minimum instance count can be zero
Batch (with processing job) | No limit | No limit | Not supported

Deploy a real-time inference endpoint

To deploy a real-time inference endpoint, we can run the bytetrack-inference-yolox.ipynb notebook. We separate ByteTrack inference into object detection and tracking. In the inference endpoint, we only run the YOLOX model for object detection. In the notebook, we create a tracking object, receive the result of object detection from the inference endpoint, and update trackers. We use the SageMaker PyTorchModel SDK to create and deploy a ByteTrack model as follows:

from sagemaker.pytorch.model import PyTorchModel

pytorch_model = PyTorchModel(
    model_data=s3_model_uri,
    role=role,
    source_dir="sagemaker-serving/code",
    entry_point="inference.py",
    framework_version="1.7.1",
    py_version="py3",
)

endpoint_name = "<endpoint name>"

pytorch_model.deploy(
    initial_instance_count=1,
    instance_type="ml.p3.2xlarge",
    endpoint_name=endpoint_name
)

After we deploy the model to an endpoint successfully, we can invoke the inference endpoint with the following code snippet:

with open(f"datasets/frame_{frame_id}.png", "rb") as f:
    payload = f.read()

response = sm_runtime.invoke_endpoint(
    EndpointName=endpoint_name, ContentType="application/x-image", Body=payload
)
outputs = json.loads(response["Body"].read().decode())

We run the tracking task on the client side after accepting the detection result from the endpoint (see the following code). By drawing the tracking results in each frame and saving as a tracking video, you can confirm the tracking result on the tracking video.
aspect_ratio_thresh = 1.6
min_box_area = 10

tracker = BYTETracker(
    frame_rate=30, track_thresh=0.5, track_buffer=30, mot20=False, match_thresh=0.8
)

online_targets = tracker.update(torch.as_tensor(outputs[0]), [height, width], (800, 1440))

online_tlwhs = []
online_ids = []
online_scores = []
for t in online_targets:
    tlwh = t.tlwh
    tid = t.track_id
    vertical = tlwh[2] / tlwh[3] > aspect_ratio_thresh
    if tlwh[2] * tlwh[3] > min_box_area and not vertical:
        online_tlwhs.append(tlwh)
        online_ids.append(tid)
        online_scores.append(t.score)
        results.append(
            f"{frame_id},{tid},{tlwh[0]:.2f},{tlwh[1]:.2f},{tlwh[2]:.2f},{tlwh[3]:.2f},{t.score:.2f},-1,-1,-1"
        )

online_im = plot_tracking(
    frame, online_tlwhs, online_ids, frame_id=frame_id + 1, fps=1. / timer.average_time
)

Deploy an asynchronous inference endpoint

SageMaker asynchronous inference is the ideal option for requests with large payload sizes (up to 1 GB), long processing times (up to 1 hour), and near-real-time latency requirements. For MOT tasks, it's common that a video file is beyond 6 MB, which is the payload limit of a real-time endpoint. Therefore, we deploy an asynchronous inference endpoint. Refer to Asynchronous inference for more details of how to deploy an asynchronous endpoint. We can reuse the model created for the real-time endpoint; for this post, we put a tracking process into the inference script so that we can get the final tracking result directly for the input video. To use scripts related to ByteTrack on the endpoint, we need to put the tracking script and model into the same folder and compress the folder as the model.tar.gz file, and then upload it to the S3 bucket for model creation. The following diagram shows the structure of model.tar.gz. We need to explicitly set the request size, response size, and response timeout as the environment variables, as shown in the following code. The name of the environment variable varies depending on the framework. For more details, refer to Create an Asynchronous Inference Endpoint.

pytorch_model = PyTorchModel(
    model_data=s3_model_uri,
    role=role,
    entry_point="inference.py",
    framework_version="1.7.1",
    sagemaker_session=sm_session,
    py_version="py3",
    env={
        'TS_MAX_REQUEST_SIZE': '1000000000',  # default max request size is 6 MB for TorchServe; update it to support the 1 GB input payload
        'TS_MAX_RESPONSE_SIZE': '1000000000',
        'TS_DEFAULT_RESPONSE_TIMEOUT': '900'  # max timeout is 15 minutes (900 seconds)
    }
)

pytorch_model.create(
    instance_type="ml.p3.2xlarge",
)

When invoking the asynchronous endpoint, instead of sending the payload in the request, we send the Amazon S3 URL of the input video. When the model inference finishes processing the video, the results will be saved on the S3 output path. We can configure Amazon Simple Notification Service (Amazon SNS) topics so that when the results are ready, we can receive an SNS message as a notification.
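As a quick illustration of that invocation pattern, the call below uses the boto3 SageMaker runtime client's invoke_endpoint_async API. The endpoint, bucket, and key names are placeholders, and the exact request and response handling in the sample notebook may differ.

import boto3

# Sketch of invoking the asynchronous endpoint with an S3 input location (names are placeholders).
sm_runtime = boto3.client("sagemaker-runtime")

response = sm_runtime.invoke_endpoint_async(
    EndpointName="<async endpoint name>",
    InputLocation="s3://<your-bucket>/input/videos/sample.mp4",
    ContentType="video/mp4",
)

# The call returns immediately; the tracking result will appear later at this S3 location.
print("Result will be written to:", response["OutputLocation"])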
Run batch inference with SageMaker processing

For video files bigger than 1 GB, we use a SageMaker processing job to do batch inference. We define a custom Docker container to run a SageMaker processing job (see the following code). We draw the tracking result on the input video. You can find the result video in the S3 bucket defined by s3_output.

from sagemaker.processing import ProcessingInput, ProcessingOutput

script_processor.run(
    code='./container-batch-inference/predict.py',
    inputs=[
        ProcessingInput(source=s3_input, destination="/opt/ml/processing/input"),
        ProcessingInput(source=s3_model_uri, destination="/opt/ml/processing/model"),
    ],
    outputs=[
        ProcessingOutput(source='/opt/ml/processing/output', destination=s3_output),
    ]
)

Clean up

To avoid unnecessary costs, delete the resources you created as part of this solution, including the inference endpoint.

Conclusion

This post demonstrated how to implement a multi-object tracking solution on a custom dataset using one of the state-of-the-art algorithms on SageMaker. We also demonstrated three deployment options on SageMaker so that you can choose the optimal option for your own business scenario. If the use case requires low latency and needs a model to be deployed on an edge device, you can deploy the MOT solution at the edge with AWS Panorama. For more information, refer to Multi Object Tracking using YOLOX + BYTE-TRACK and data analysis.

About the Authors

Gordon Wang is a Senior AI/ML Specialist TAM at AWS. He supports strategic customers with AI/ML best practices across many industries. He is passionate about computer vision, NLP, Generative AI, and MLOps. In his spare time, he loves running and hiking.

Yanwei Cui, PhD, is a Senior Machine Learning Specialist Solutions Architect at AWS. He started machine learning research at IRISA (Research Institute of Computer Science and Random Systems), and has several years of experience building artificial intelligence powered industrial applications in computer vision, natural language processing, and online user behavior prediction. At AWS, he shares the domain expertise and helps customers to unlock business potentials, and to drive actionable outcomes with machine learning at scale. Outside of work, he enjoys reading and traveling.

Melanie Li, PhD, is a Senior AI/ML Specialist TAM at AWS based in Sydney, Australia. She helps enterprise customers build solutions leveraging the state-of-the-art AI/ML tools on AWS and provides guidance on architecting and implementing machine learning solutions with best practices. In her spare time, she loves to explore nature outdoors and spend time with family and friends.

Guang Yang is a Senior applied scientist at the Amazon ML Solutions Lab where he works with customers across various verticals and applies creative problem solving to generate value for customers with state-of-the-art ML/AI solutions.


Scaling Asp.Net 6 Application using Docker Swarm
Category: Docker

On 11/27/21 I worked on finding a way to scale Docker Containers with Docker Swarm, practiced on ...


Views: 0 Likes: 41
Docker Container Micro-Service Error: Can not Conn ...
Category: Docker

Problem Can not Connect to SQL Server in Docker Container from Microsoft Sql Server Management ...


Views: 257 Likes: 90
Docker Container Won't Connect to Host Network
Category: Docker

[NB] If the docker-compose up and the docker container won't connect to the host network, make su ...


Views: 423 Likes: 92
Docker-compose Error: ERROR: Couldn't connect to D ...
Category: Network

Question How do you resolve the docker-compose error sudo docker ...


Views: 651 Likes: 102
[Solved] Frigate NVR Python Error "Fatal Python er ...
Category: Other

Question How do you solve for the error in ...


Views: 0 Likes: 10
The .NET Stacks #62: ?? And we're back
The .NET Stacks #62 ?? And we're back

This is the web version of my weekly newsletter, The .NET Stacks, originally sent to email subscribers on September 13, 2021. Subscribe at the bottom of the post to get this right away!

Happy Monday! Miss me? A few of you said you have, but I'm 60% sure that's sarcasm. As you know, I took the last month or so off from the newsletter to focus on other things. I know I wasn't exactly specific on why, and appreciate some of you reaching out. I wasn't comfortable sharing it at the time, but I needed to take time away to focus on determining the next step in my career. If you've interviewed lately, I'm sure you understand ... it really is a full-time job. I'm happy to say I've accepted a remote tech lead role for a SaaS company here. I'm rested and ready, so let's get into it! I'm trying something a little different this week—feel free to let me know what you think.

?? My favorite from last week

ASP.NET 6.0 Minimal APIs, why should you care? (Ben Foster)

We've talked about Minimal APIs a lot in this newsletter and it's quite the hot topic in the .NET community. An alternative way to write APIs in .NET 6 and beyond, there's a lot of folks wondering if it's suitable for production, or can lead to misuse. Ben notes: "Minimal simply means that it contains the minimum set of components needed to build HTTP APIs ... It doesn't mean that the application you build will be simple or not require good design." "I find that one of the biggest advantages to Minimal APIs is that they make it easier to build APIs in an opinionated way. After many years building HTTP services, I have a preferred approach. With MVC I would replace the built-in validation with Fluent Validation and my controllers were little more than a dispatch call to Mediatr. With Minimal APIs I get even more control. Of course if MVC offers everything you need, then use that."

In a similar vein, Nick Chapsas has a great walkthrough on strategies for building production-ready Minimal APIs. No one expects your API to be in one file, and he shows practical ways to deal with dependencies while leveraging minimal API patterns. Damian Edwards has a nice Twitter thread, as well. As great as these community discussions are, I really think the greatest benefit, the performance gains, is getting lost.

?? Community and events

Increasing developer happiness with GitHub code scanning (Sam Partington)

If you work in GitHub, you probably already know that GitHub utilizes code scanning to find security vulnerabilities and errors in your repository. Sam Partington writes about something you might not know: they use CodeQL—their internal code analysis engine—to protect themselves from common coding mistakes. Here's what Sam says about loopy performance issues: "In addition to protecting against missing error checking, we also want to keep our database-querying code performant. N+1 queries are a common performance issue. This is where some expensive operation is performed once for every member of a set, so the code will get slower as the number of items increases. Database calls in a loop are often the culprit here; typically, you'll get better performance from a batch query outside of the loop instead." "We created a custom CodeQL query ... We filter that list of calls down to those that happen within a loop and fail CI if any are encountered.
What's nice about CodeQL is that we're not limited to database calls directly within the body of a loop; calls within functions called directly or indirectly from the loop are caught too." You can check out the post for more details and learn how to use these queries or make your own.

More from last week: Simon Bisson writes about how to use the VS Code editor in your own projects. The Netflix Tech Blog starts a series on practical API design and also starts writing about their decision-making process. The .NET Docs Show talks about micro frontends with Blazor. For community standups, Entity Framework talks about OSS projects, ASP.NET has an anniversary, .NET MAUI discusses accessibility, and Machine Learning holds office hours.

?? Web development

How To Map A Route in an ASP.NET Core MVC application (Khalid Abuhakmeh)

If you're new to ASP.NET Core web development, Khalid put together a nice post on how to add an existing endpoint to an existing ASP.NET Core MVC app. Even if you aren't a beginner, you might learn how to resolve sticky routing issues. At the bottom of the post, he has a nice checklist you should consider when adding a new endpoint.

More from last week: Ben Foster explores custom model binding with Minimal APIs in .NET 6. Thomas Ardal debugs System.FormatException when launching ASP.NET Core. Jeremy Morgan builds a small web API with Azure Functions and SQLite. Ed Charbeneau works with low-code data grids and Blazor. Scott Hanselman works with a Minimal API todo app.

?? The .NET platform

Using Source Generators with Blazor components in .NET 6 (Andrew Lock)

When Andrew was upgrading a Blazor app to .NET 6, he found that source generators that worked in .NET 5 failed to discover Blazor components in his .NET 6 app because of changes to the Razor compilation process. He writes: "The problem is that my source generators were relying on the output of the Razor compiler in .NET 5 ... My source generator was looking for components in the compilation that are decorated with [RouteAttribute]. With .NET 6, the Razor tooling is a source generator, so there is no 'first step'; the Razor tooling executes at the same time as my source generator. That is great for performance, but it means the files my source generator was relying on (the generated component classes) don't exist when my generator runs." While this is by design, Andrew has a great post outlining the issue and potential workarounds.

More from last week: Mark Downie writes about his favorite improvements in .NET 6. Sergey Vasiliev writes about optimizing .NET apps. Pawel Szydziak writes cleaner, safer code with SonarQube, Docker, and .NET Core. Sam Basu writes about how to develop for desktop in 2022, and also about developing for .NET MAUI on macOS. Paul Michaels manually parses a JSON string using System.Text.Json. Johnson Towoju writes logs to SQL Server using NLog. Andrew Lock uses source generators with Blazor components in .NET 6. Rick Strahl launches Visual Studio Code cleanly from a .NET app. Jiří Činčura calls a C# static constructor multiple times.

? The cloud

Minimal Api in .NET 6 Out Of Process Azure Functions (Adam Storr)

With all this talk about Minimal APIs, Adam asks: can I use it with the new out-of-process Azure Functions model in .NET 6? He says "Azure Functions with HttpTriggers are similar to ASP.NET Core controller actions in that they handle http requests, have routing, can handle model binding, dependency injection etc.
so how could a 'Minimal API' using Azure Functions look?"

More from last week: Damien Bowden uses Azure security groups in ASP.NET Core with an Azure B2C identity provider. Jon Gallant works with the ChainedTokenCredential in the Azure Identity library. Adam Storr uses .NET 6 Minimal APIs with out-of-process Azure Functions.

?? Tools

New Improved Attach to Process Dialog Experience (Harshada Hole)

With the 2022 update, Visual Studio is improving the debugging experience—included is a new Attach to Process dialog experience. Harshada says "We have added command-line details, app pool details, parent/child process tree view, and the select running window from the desktop option in the attach to process dialog. These make it convenient to find the right process you need to attach. Also, the Attach to Process dialog is now asynchronous, making it interactive even when the process list is updating." The post walks through these updates in detail.

More from last week: Jeremy Likness looks at the EF Core Azure Cosmos DB provider. Harshada Hole writes about the new Attach to Process dialog experience in Visual Studio. Ben De St Paer-Gotch goes behind the scenes on Docker Desktop. Esteban Solano Granados plays with .NET 6, C# 10, and Docker.

?? Design, testing, and best practices

Ship / Show / Ask: A modern branching strategy (Rouan Wilsenach)

Rouan says "Ship/Show/Ask is a branching strategy that combines the features of Pull Requests with the ability to keep shipping changes. Changes are categorized as either Ship (merge into mainline without review), Show (open a pull request for review, but merge into mainline immediately), or Ask (open a pull request for discussion before merging)."

More from last week: Liana Martirosyan writes about enabling team learning and boosting performance. Sagar Nangare writes about measuring user experience in modern applications and infrastructure. Neal Ford and Mark Richards talk about the hard parts of software architecture. Derek Comartin discusses event-sourced aggregate design. Steve Smith refactors to value objects. Sam Milbrath writes about holding teams accountable without micromanaging. Helen Scott asks how you can stay ahead of the curve as a developer. Rouan Wilsenach writes about a ship / show / ask branching strategy. Jeremy Miller writes about integration testing using the IHost Lifecycle with xUnit.Net.

?? Podcasts and Videos

Serverless Chats discusses serverless for beginners. The .NET Core Show talks about DotPurple with Michael Babienco. The Changelog talks to a lawyer about GitHub Copilot. Technology and Friends talks to Sam Basu about .NET MAUI. Visual Studio Toolbox talks about Web Live Preview. The ASP.NET Monsters talk about new Git commands. Adventures in .NET talk about Jupyter notebooks. The On .NET Show migrates apps to modern authentication and processes payments with C# and Stripe.


Solved: can't find a suitable configuration file i ...
Category: Docker

Question How do you solve the docker-com ...


Views: 917 Likes: 85
[Solved] How Resolve Suspected Database in Microso ...
Category: SQL

Question How do you remove the status of "Emergency" from the ...


Views: 168 Likes: 68
Scaling Docker Application with Docker-Compose
Category: Other

version '3.5' services petstore image petstorelates ...


Views: 0 Likes: 9
ERRO[0031] Can't add file ProjectFile to tar: arch ...
Category: Docker-Compose

Question Why is docker-compose build causing an error "ERRO[0031] Can't add file /ProjectName/z ...


Views: 0 Likes: 47
deepstack error: Docker CPU face recognize request ...
Category: Machine Learning

Question Docker CPU face recognize request with rand ...


Views: 0 Likes: 10
Can not connect to SQL server in docker container ...
Category: Docker

Problem The challenge was to connect to an SQL Server Instan ...


Views: 2004 Likes: 93
Engineering Manager at Raptor Maps | Y Combinator
Engineering Manager at Raptor Maps | Y Combinator

## Intro

Do you want to be a part of a startup building large scale software for the renewable energy transition? Do you have a passion for managing multiple software teams to build high quality products and code that pushes the bar? Would you enjoy being the first engineering manager with the opportunity to shape our team, engineering practices, and software stack? As we continue to grow we're looking to hire our first engineering manager who can learn our system quickly, grow with us, enable our engineers to do their best work, help us build the team further, and set the bar for EM at Raptor Maps.

## Qualities

* You consistently guide several small engineering teams to maintain a healthy & productive maker environment
* You love enabling the engineering team to decide where to be along the slider of "hacky enough to test an idea quickly" versus "extremely careful that we enable systems to scale"… remember, we are a startup!
* When Product provides the "what", you're quick and excited to enable the engineers to provide the "how" and follow through to make sure the engineering team delivers
* You are excited to discuss architecture and technical direction and can roll up your sleeves to build, but you are also comfortable deferring to the engineers on the implementation
* You like having a tight feedback loop with your direct reports, which includes check-ins with technical leads, developers, quality engineers, and interns
* You enjoy determining engineering/product/design team structure and resource allocation to best achieve company goals
* You love analyzing data and empower engineers to do so when making strategic implementation tradeoffs, whether for a small feature or massive system-wide change
* You can recognize and encourage engineers into using good patterns and good code hygiene techniques that are easily understood and lead to a solid code base for a growing engineering team

## Qualifications

* You have 2+ years of experience as a software engineering manager
* You have 4+ years of experience as a software engineer
* You have experience leading a software development team that balances multiple competing priorities
* You have experience collaborating with product managers and product designers to shape requirements and designs into technical specifications ready for iterative implementation by engineers
* You have experience recruiting exceptional engineers
* Must be primarily located in the contiguous US
* Must be authorized to work in the US

## Tech Stack

* Python
* JavaScript with React and React Native
* PostgreSQL
* Amazon Web Services, with a general theme of structuring the system into many services to enable more technologies to be used
* Docker, CircleCI, Clubhouse
* ML workflow orchestration tools
* MacOS fleet

## Benefits

* Healthcare with dental and vision options
* Unlimited vacation policy
* Full remote work with paid travel for in-person meetups
* Monthly remote social events and plenty of DoorDash credit
* Amazing team members that tend to love memes, pets, solar, and just generally getting out and being active


MicroServerlessLamba.NET Series – Intro: A Scalable Server-less Architecture in AWS Lambda with ASP.NET Core Microservices
MicroServerlessLamba.NET Series – Intro A Scalabl ...

Over the last 18 months I have been working with our team to develop a scalable microservice architecture in AWS Lambda.  Starting with very little experience in AWS prior to this undertaking and now feeling MUCH more comfortable with it, I wanted to summarize our journey and share it with others who may find it useful.  I would also love to hear and learn from those of you who may have feedback on our approach. First, why server-less? While containers seem to be all the rage in the software industry (and I do appreciate their utility), I do not believe they are the silver bullet many present them to be.  In our situation, we were building a greenfield solution with no dependencies to shackle us requiring a hosting mechanism that allowed for OS-level customizations or installs.  Server-less offerings were also a new cloud trend in the industry which almost everyone was using in one way or another so we definitely knew we would utilize it as well.  The more we researched it, the more we thought we could use it for everything with little to no compromise. Every adopted technology has a cost in learning and developing an expertise.  This is especially apparent with a small team.  We wanted to move fast and were looking for every opportunity to save time.  The time we saved NOT learning Docker/Kubernetes/containers/orchestration definitely helped us deliver a working solution faster. I have no doubt we will eventually leverage containers for certain use cases but we will continue to defer until we find a case where their value outweighs the costs. .NET Core The vast majority of our 10 years of code was developed in .NET 4.5 with a brief and unsuccessful detour in NodeJS.  Leadership wanted a new scalable API to extend our success with smaller customer to larger customers through API integrations with their software systems. At the time, .NET Core was just starting to gain some traction offering cross-platform execution, better performance and improved APIs, all developed in open source.  These features coupled with the easier transition from full .NET framework made it the clear choice.  We settled on using .NET Core 2.1 in Linux as our base for the architecture. Cloud Selection – Azure vs AWS We started our new development effort with a 3-4 week evaluation in both Azure and AWS.  We leveraged architect teams from each vendor to expedite our POCs and leverage their expertise to further develop our designs. We successfully developed a simplified version of our planned architecture in each cloud and evaluated the experience. This was a really valuable step in our journey.  Thankfully it was fairly easy to setup time architects for both vendors by working with our cloud representatives.  Both also provided access to their product teams for deeper questions and assistance. You should absolutely leverage these services if you are in a similar situation. In the end we decided to use AWS for a few reasons.  First our team had more AWS experience overall and were more comfortable with it.  Next Azure didn’t yet support .NET Core 2.1 and wouldn’t for another 4-6 months.  And lastly, our legacy products were already in AWS. Overall, both offered very similar capabilities/experiences and either could be an excellent choice for building a new server-less architecture. Summary In closing, this was the process we used in choosing the core platforms to build our new architecture upon.  I plan on writing a series of posts to continue sharing this journey and how we brought these platforms together. 
If you have feedback on this post or questions that you would like covered in future posts please leave a comment below. Cheers


docker exec -it ContainerID bash
Category: Docker

Question How do you execute commands inside Docker Container running in Azure VM?Ans ...


Views: 0 Likes: 63
Fine-tune GPT-J using an Amazon SageMaker Hugging Face estimator and the model parallel library
Fine-tune GPT-J using an Amazon SageMaker Hugging ...

GPT-J is an open-source 6-billion-parameter model released by Eleuther AI. The model is trained on the Pile and can perform various tasks in language processing. It can support a wide variety of use cases, including text classification, token classification, text generation, question and answering, entity extraction, summarization, sentiment analysis, and many more. GPT-J is a transformer model trained using Ben Wang’s Mesh Transformer JAX. In this post, we present a guide and best practices on training large language models (LLMs) using the Amazon SageMaker distributed model parallel library to reduce training time and cost. You will learn how to train a 6-billion-parameter GPT-J model on SageMaker with ease. Finally, we share the main features of SageMaker distributed model parallelism that help with speeding up training time. Transformer neural networks A transformer neural network is a popular deep learning architecture to solve sequence-to-sequence tasks. It uses attention as the learning mechanism to achieve close to human-level performance. Some of the other useful properties of the architecture compared to previous generations of natural language processing (NLP) models include the ability distribute, scale, and pre-train. Transformers-based models can be applied across different use cases when dealing with text data, such as search, chatbots, and many more. Transformers use the concept of pre-training to gain intelligence from large datasets. Pre-trained transformers can be used as is or fine-tuned on your datasets, which can be much smaller and specific to your business. Hugging Face on SageMaker Hugging Face is a company developing some of the most popular open-source libraries providing state-of-the-art NLP technology based on transformers architectures. The Hugging Face transformers, tokenizers, and datasets libraries provide APIs and tools to download and predict using pre-trained models in multiple languages. SageMaker enables you to train, fine-tune, and run inference using Hugging Face models directly from its Hugging Face Model Hub using the Hugging Face estimator in the SageMaker SDK. The integration makes it easier to customize Hugging Face models on domain-specific use cases. Behind the scenes, the SageMaker SDK uses AWS Deep Learning Containers (DLCs), which are a set of prebuilt Docker images for training and serving models offered by SageMaker. The DLCs are developed through a collaboration between AWS and Hugging Face. The integration also offers integration between the Hugging Face transformers SDK and SageMaker distributed training libraries, enabling you to scale your training jobs on a cluster of GPUs. Overview of the SageMaker distributed model parallel library Model parallelism is a distributed training strategy that partitions the deep learning model over numerous devices, within or across instances. Deep learning (DL) models with more layers and parameters perform better in complex tasks like computer vision and NLP. However, the maximum model size that can be stored in the memory of a single GPU is limited. 
GPU memory constraints can be bottlenecks while training DL models in the following ways:

They limit the size of the model that can be trained, because a model's memory footprint scales proportionately to the number of parameters
They reduce GPU utilization and training efficiency by limiting the per-GPU batch size during training

SageMaker includes the distributed model parallel library to help distribute and train DL models effectively across many compute nodes, overcoming the restrictions associated with training a model on a single GPU. Furthermore, the library allows you to obtain the most optimal distributed training utilizing EFA-supported devices, which improves inter-node communication performance with low latency, high throughput, and OS bypass. Because large models such as GPT-J, with billions of parameters, have a GPU memory footprint that exceeds a single chip, it becomes essential to partition them across multiple GPUs. The SageMaker model parallel (SMP) library enables automatic partitioning of models across multiple GPUs. With SageMaker model parallelism, SageMaker runs an initial profiling job on your behalf to analyze the compute and memory requirements of the model. This information is then used to decide how the model is partitioned across GPUs, in order to maximize an objective, such as maximizing speed or minimizing memory footprint. It also supports optional pipeline run scheduling in order to maximize the overall utilization of available GPUs. The propagation of activations during the forward pass and gradients during the backward pass requires sequential computation, which limits the amount of GPU utilization. SageMaker overcomes the sequential computation constraint utilizing the pipeline run schedule by splitting mini-batches into micro-batches to be processed in parallel on different GPUs. SageMaker model parallelism supports two modes of pipeline runs:

Simple pipeline – This mode finishes the forward pass for each micro-batch before starting the backward pass.
Interleaved pipeline – In this mode, the backward run of the micro-batches is prioritized whenever possible. This allows for quicker release of the memory used for activations, thereby using memory more efficiently.

Tensor parallelism

Individual layers, or nn.Modules, are divided across devices using tensor parallelism so they can run concurrently. The simplest example of how the library divides a model with four layers to achieve two-way tensor parallelism ("tensor_parallel_degree": 2) is shown in the following figure. Each model replica's layers are bisected (divided in half) and distributed between two GPUs. The degree of data parallelism is eight in this example because the model parallel configuration additionally includes "pipeline_parallel_degree": 1 and "ddp": True. The library manages communication among the replicas of the tensor-distributed model. The benefit of this feature is that you may choose which layers or which subset of layers you want to apply tensor parallelism to. To dive deep into tensor parallelism and other memory-saving features for PyTorch, and to learn how to set up a combination of pipeline and tensor parallelism, see Extended Features of the SageMaker Model Parallel Library for PyTorch.

SageMaker sharded data parallelism

Sharded data parallelism is a memory-saving distributed training technique that splits the training state of a model (model parameters, gradients, and optimizer states) across GPUs in a data parallel group.
When scaling up your training job to a large GPU cluster, you can reduce the per-GPU memory footprint of the model by sharding the training state over multiple GPUs. This returns two benefits: you can fit larger models, which would otherwise run out of memory with standard data parallelism, or you can increase the batch size using the freed-up GPU memory. The standard data parallelism technique replicates the training states across the GPUs in the data parallel group and performs gradient aggregation based on the AllReduce operation. In effect, sharded data parallelism introduces a trade-off between the communication overhead and GPU memory efficiency. Using sharded data parallelism increases the communication cost, but the memory footprint per GPU (excluding the memory usage due to activations) is divided by the sharded data parallelism degree, therefore larger models can fit in a GPU cluster. SageMaker implements sharded data parallelism through the MiCS implementation. For more information, see Near-linear scaling of gigantic-model training on AWS. Refer to Sharded Data Parallelism for further details on how to apply sharded data parallelism to your training jobs.

Use the SageMaker model parallel library

The SageMaker model parallel library comes with the SageMaker Python SDK. You need to install the SageMaker Python SDK to use the library, and it's already installed on SageMaker notebook kernels. To make your PyTorch training script utilize the capabilities of the SMP library, you need to make the following changes:

Start by importing and initializing the smp library using the smp.init() call.
Once it's initialized, you can wrap your model with the smp.DistributedModel wrapper and use the returned DistributedModel object instead of the user model.
For your optimizer state, use the smp.DistributedOptimizer wrapper around your model optimizer, enabling smp to save and load the optimizer state.
The forward and backward pass logic can be abstracted into a separate function with an smp.step decorator added to it. Essentially, the forward pass and back-propagation need to be run inside the function with the smp.step decorator placed over it. This allows smp to split the tensor input to the function into a number of microbatches specified while launching the training job.
Next, we can move the input tensors to the GPU used by the current process using the torch.cuda.set_device API followed by the .to() API call.
Finally, for back-propagation, we replace torch.Tensor.backward and torch.autograd.backward.

See the following code:

@smp.step
def train_step(model, data, target):
    output = model(data)
    loss = F.nll_loss(output, target, reduction="mean")
    model.backward(loss)
    return output, loss

with smp.tensor_parallelism():
    model = AutoModelForCausalLM.from_config(model_config)

model = smp.DistributedModel(model)
optimizer = smp.DistributedOptimizer(optimizer)

The SageMaker model parallel library's tensor parallelism offers out-of-the-box support for the following Hugging Face Transformer models:

GPT-2, BERT, and RoBERTa (available in the SMP library v1.7.0 and later)
GPT-J (available in the SMP library v1.8.0 and later)
GPT-Neo (available in the SMP library v1.10.0 and later)
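These script changes are activated when the training job is launched with the model parallel distribution options on the estimator. The snippet below is a rough sketch of what that launch configuration can look like with the SageMaker Hugging Face estimator; the instance type, script names, framework versions, and parameter values are illustrative placeholders, and the exact options used in the GPT-J notebooks may differ.

from sagemaker.huggingface import HuggingFace

# Illustrative sketch (not the exact notebook configuration): launch a training job
# with the SageMaker model parallel library enabled via the distribution settings.
smp_options = {
    "enabled": True,
    "parameters": {
        "pipeline_parallel_degree": 1,   # assumed values for illustration
        "tensor_parallel_degree": 8,
        "ddp": True,
        "shard_optimizer_state": True,
        "fp16": True,
    },
}

mpi_options = {"enabled": True, "processes_per_host": 8}

huggingface_estimator = HuggingFace(
    entry_point="train_gptj_smp_tensor_parallel_script.py",  # placeholder script name
    source_dir="./code",
    instance_type="ml.p4d.24xlarge",
    instance_count=1,
    role=role,                      # assumes an existing SageMaker execution role
    transformers_version="4.17",    # versions are placeholders; match a supported DLC
    pytorch_version="1.10",
    py_version="py38",
    distribution={"smdistributed": {"modelparallel": smp_options}, "mpi": mpi_options},
    hyperparameters={"epochs": 1, "model_name": "EleutherAI/gpt-j-6B"},
)

huggingface_estimator.fit({"train": "s3://<your-bucket>/gptj/train"})

The distribution dictionary is where the pipeline, tensor parallelism, and sharding settings described in this post come together for a single training job.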
Best practices for performance tuning with the SMP library

When training large models, consider the following steps so that your model fits in GPU memory with a reasonable batch size:

It's recommended to use instances with higher GPU memory and high bandwidth interconnect for performance, such as p4d and p4de instances.
Optimizer state sharding can be enabled in most cases, and will be helpful when you have more than one copy of the model (data parallelism enabled). You can turn on optimizer state sharding by setting "shard_optimizer_state": True in the modelparallel configuration.
Use activation checkpointing, a technique to reduce memory usage by clearing activations of certain layers and recomputing them during the backward pass of selected modules in the model.
Use activation offloading, an additional feature that can further reduce memory usage. To use activation offloading, set "offload_activations": True in the modelparallel configuration. Use it when activation checkpointing and pipeline parallelism are turned on and the number of microbatches is greater than one.
Enable tensor parallelism and increase parallelism degrees where the degree is a power of 2. Typically for performance reasons, tensor parallelism is restricted to within a node.

We have run many experiments to optimize training and tuning GPT-J on SageMaker with the SMP library. We have managed to reduce GPT-J training time for an epoch on SageMaker from 58 minutes to less than 10 minutes—six times faster training time per epoch. It took less than a minute for initialization and for the model and dataset download from Amazon Simple Storage Service (Amazon S3), less than 1 minute for tracing and auto partitioning with GPU as the tracing device, and 8 minutes to train an epoch using tensor parallelism on one ml.p4d.24xlarge instance, FP16 precision, and a SageMaker Hugging Face estimator. To reduce training time as a best practice, when training GPT-J on SageMaker, we recommend the following:

Store your pretrained model on Amazon S3.
Use FP16 precision.
Use GPU as a tracing device.
Use auto-partitioning, activation checkpointing, and optimizer state sharding (auto_partition: True, shard_optimizer_state: True).
Use tensor parallelism.
Use a SageMaker training instance with multiple GPUs such as ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g5.48xlarge, ml.p4d.24xlarge, or ml.p4de.24xlarge.

GPT-J model training and tuning on SageMaker with the SMP library

A working step-by-step code sample is available on the Amazon SageMaker Examples public repository. Navigate to the training/distributed_training/pytorch/model_parallel/gpt-j folder. Select the gpt-j folder and open the train_gptj_smp_tensor_parallel_notebook.ipynb Jupyter notebook for the tensor parallelism example and train_gptj_smp_notebook.ipynb for the pipeline parallelism example. You can find a code walkthrough in our Generative AI on Amazon SageMaker workshop. This notebook walks you through how to use the tensor parallelism features provided by the SageMaker model parallelism library. You'll learn how to run FP16 training of the GPT-J model with tensor parallelism and pipeline parallelism on the GLUE sst2 dataset.

Summary

The SageMaker model parallel library offers several functionalities. You can reduce cost and speed up training LLMs on SageMaker. You can also learn and run sample codes for BERT, GPT-2, and GPT-J on the Amazon SageMaker Examples public repository. To learn more about AWS best practices for training LLMs using the SMP library, refer to the following resources: SageMaker Distributed Model Parallelism Best Practices, and Training large language models on Amazon SageMaker: Best practices. To learn how one of our customers achieved low-latency GPT-J inference on SageMaker, refer to How Mantium achieves low-latency GPT-J inference with DeepSpeed on Amazon SageMaker.
GPT-J model training and tuning on SageMaker with the SMP library

A working step-by-step code sample is available in the Amazon SageMaker Examples public repository. Navigate to the training/distributed_training/pytorch/model_parallel/gpt-j folder, then open the train_gptj_smp_tensor_parallel_notebook.ipynb Jupyter notebook for the tensor parallelism example and train_gptj_smp_notebook.ipynb for the pipeline parallelism example. You can find a code walkthrough in our Generative AI on Amazon SageMaker workshop. This notebook walks you through how to use the tensor parallelism features provided by the SageMaker model parallelism library. You'll learn how to run FP16 training of the GPT-J model with tensor parallelism and pipeline parallelism on the GLUE sst2 dataset.

Summary

The SageMaker model parallel library offers several functionalities. You can reduce cost and speed up training LLMs on SageMaker. You can also learn from and run sample code for BERT, GPT-2, and GPT-J in the Amazon SageMaker Examples public repository. To learn more about AWS best practices for training LLMs using the SMP library, refer to the following resources:

SageMaker Distributed Model Parallelism Best Practices
Training large language models on Amazon SageMaker: Best practices

To learn how one of our customers achieved low-latency GPT-J inference on SageMaker, refer to How Mantium achieves low-latency GPT-J inference with DeepSpeed on Amazon SageMaker.

If you're looking to accelerate time-to-market for your LLMs and reduce your costs, SageMaker can help. Let us know what you build!

About the Authors

Zmnako Awrahman, PhD, is a Practice Manager, ML SME, and Machine Learning Technical Field Community (TFC) member at the Global Competency Center, Amazon Web Services. He helps customers leverage the power of the cloud to extract value from their data with data analytics and machine learning.

Roop Bains is a Senior Machine Learning Solutions Architect at AWS. He is passionate about helping customers innovate and achieve their business objectives using artificial intelligence and machine learning. He helps customers train, optimize, and deploy deep learning models.

Anastasia Pachni Tsitiridou is a Solutions Architect at AWS. Anastasia lives in Amsterdam and supports software businesses across the Benelux region in their cloud journey. Prior to joining AWS, she studied electrical and computer engineering with a specialization in computer vision. What she enjoys most nowadays is working with very large language models.

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.

Wioletta Stobieniecka is a Data Scientist at AWS Professional Services. Throughout her professional career, she has delivered multiple analytics-driven projects for different industries such as banking, insurance, telco, and the public sector. Her knowledge of advanced statistical methods and machine learning is well combined with business acumen. She brings recent AI advancements to create value for customers.

Rahul Huilgol is a Senior Software Development Engineer in Distributed Deep Learning at Amazon Web Services.


The .NET Stacks #32: SSR is cool again
The .NET Stacks #32: SSR is cool again

Good morning and happy Monday! We’ve got a few things to discuss this weekThe new/old hotness HTML over the wireXamarin.Forms 5.0 released this weekQuick break how to explaining C# string interpolation to the United States SenateLast week in the .NET worldThe new/old hotness server-side renderingOver the holidays, I was intrigued by the release of the Hotwire project, from the folks at BasecampHotwire is an alternative approach to building modern web applications without using much JavaScript by sending HTML instead of JSON over the wire. This makes for fast first-load pages, keeps template rendering on the server, and allows for a simpler, more productive development experience in any programming language, without sacrificing any of the speed or responsiveness associated with a traditional single-page application.Between this and other tech such as Blazor Server, the “DOM over the wire” movement is in full force. It’s a testament to how bloated and complicated the front end has become.Obviously, rendering partial HTML over the wire isn’t anything new at all—especially to us .NET developers—and it’s sure to bring responses like “Oh, you mean what I’ve been doing the last 15 years?” As much as I enjoy the snark, it’s important to not write it off as the front-end community embracing what we’ve become comfortable with, as the technical details differ a bit—and we can learn from it. For example, it looks like instead of Hotwire working with DOM diffs over the wire, it streams partial updates over WebSocket while dividing complex pages into separate components, with an eye on performance. I wonder how Blazor Server would have been architected if this was released 2 years ago.Xamarin.Forms 5.0 released this weekThis week, the Xamarin team released the latest stable release of Xamarin.Forms, version 5.0, which will be supported through November 2022. There’s updates for App Themes, Brushes, and SwipeView, among other things. The team had a launch party. Also, David Ramel writes that this latest version drops support for Visual Studio 2017. Updates to Android and iOS are only delivered to 2019, and pivotal for getting the latest updates from Apple and Google.2021 promises to be a big year for Xamarin, as they continue preparing to join .NET 6—as this November, Xamarin.Forms evolves into MAUI (the .NET Multi-Platform App UI). This means more than developing against iPhones and Android devices, of course. With .NET 6 this also includes native UIs for iOS, Android, and desktops. As David Ramel also writes, Linux will not be supported out of the gate and VS Code support will be quite limited.As he also writes, in a community standup David Ortinau clarifies that MAUI is not a rewrite.So my hope and expectation, depending on the complexity of your projects, is you can be up and going within days … It’s not rewrites – it’s not a rewrite – that’s probably the biggest message that I should probably say over and over and over again. You’re not rewriting your application.Quick break how to explain C# string interpolation to the United States SenateDid I ever think C# string interpolation would make it to the United States Senate? No, I most certainly did not. But last month, that’s what happened as former Cybersecurity and Infrastructure Security Agency (CISA) head Chris Krebs explained a bugIt’s on page 20 … it says ‘There is no permission to {0}’. … Something jumped out at me, having worked at Microsoft. … The election-management system is coded with the programming language called C#. 
There is no permission to {0}’ is placeholder for a parameter, so it may be that it’s just not good coding, but that certainly doesn’t mean that somebody tried to get in there a 0. They misinterpreted the language in what they saw in their forensic audit.It appears that the election auditors were scared by something like thisConsole.WriteLine("There is no permission to {0}"); To us, we know it’s just a log statement that verifies permission checks are working. It should have been coded using one of the following lines of codeConsole.WriteLine("There is no permission to {0}", permission); Console.WriteLine($"There is no permission to {permission}"); I’m available to explain string interpolation to my government for a low, low rate of $1000 an hour. All they had to do was ask.?? Last week in the .NET world?? The Top 4Josef Ottosson works with polymorphic deserialization with System.Text.Json.Shahed Chowdhuri works with init-only setters in C# 9.Khalid Abuhakmeh writes about EF Core 5 interceptors.Over at the AWS site, the folks at DraftKings have a nice read about modernizing with .NET Core and AWS.?? AnnouncementsWinUI 3 Preview 3 has been released.David Ortinau announces the arrival of Xamarin.Forms 5.0.Microsoft Learn has a new module on learning Python.James Newton-King releases a new Microsoft doc, Code-first gRPC services and clients with .NET.Phillip Carter brings attention to a living F# coding conventions document.Patrick Svensson releases version 0.37 of Spectre.Console.The Azure SDK team released new .NET packages to simplify migrations using Newtonsoft.Json and/or Microsoft.Spatial.The EF Core team releases EFCore.NamingConventions 5.0.1, which fixes issues with owned entities and table splitting in 5.0.0.?? Community and eventsChris Noring introduces GitHub’s web dev for beginners tutorials.Niels Swimberghe rolls out two utilities written in Blazor a GZIP compressor/decompressor, and a .NET GUID generator.ErikEJ writes about some free resources for EF 5.The Xamarin community standup is a launch party for Xamarin.Forms 5.The .NET Docs Show talks to co-host David Pine about his localization project.Shahed Chowdhuri previews a new C# A-Z project and a Marvel cinematic visualization app.Chris Woodruff kicks of an ASP.NET 5 Web API blog series.VS Code Day is slated for January 27.?? Web developmentPeter Vogel writes about displaying lists efficiently in Blazor.Over at Code Maze, using the API gateway pattern in .NET to encapsulate microservices.David Fowler notes that web socket compression is coming to .NET 6.Chris Noring manages configuration in ASP.NET Core.Marinko Spasojevic signs in with Google using Angular and ASP.NET Core Web API.Damien Bowden works with Azure AD access token lifetime policy management in ASP.NET Core.Paul Michaels views server variables in ASP.NET Core.Sahan Serasinghe writes about using Web Sockets with ASP.NET Core.?? The .NET platformRichard Reedy talks about the Worker Service in .NET Core.Marco Minerva develops desktop apps with .NET 5.Jimmy Bogard works with ActivitySource and ActivityListener in .NET 5.Nikola Zivkovic introduces machine learning with ML.NET.Nick Randolph works with missing files in a multi-targeted project.Stefan Koell writes about migrating Royal TS from WinForms to .NET 5.? 
The cloudAndrew Lock auto-assigns issues using a GitHub Action.Richard Reedy builds a chatbot to order a pizza.Dave Brock uses the Microsoft Bot Framework to analyze emotion with the Azure Face API.Jonathan Channon uses GCP Cloud Functions with F#.Justin Yoo writes about using Azure EventGrid.Mark Heath writes about bulk uploading files to Azure Blob Storage with the Azure CLI.Daniel Krzyczkowski continues his series on writing an ASP.NET Core API secured by Azure AD B2C.Paul Michaels schedules message delivery with Azure Service Bus.?? LanguagesRick Strahl works with blank zero values in .NET number format strings.David McCarter analyzes code for issues in .NET 5.Khalid Abuhakmeh plays audio files with .NET.Daniel Bachler talks about what he wishes he knew when learning F#.Michal Niegrzybowski writes about signaling in WebRTC with Ably and Fable.Mark-James McDougall talks about why he’s learning F# in 2021.?? ToolsJason Robert creates a serverless Docker image.Stephen Cleary kicks off a series around asynchronous messaging.Michal Bialecki recaps useful SQL statements when writing EF Core migrations.Derek Comartin splits up a monolith into microservices.Frank Boucher creates a CI/CD deployment solution for a Docker project.Alex Orlov writes about using TLS 1.3 for IMAP and SMTP connections through Mailbee.NET.Brad Beggs writes about using vertical rulers in VS Code.Tim Cochran writes about maximizing developer effectiveness.?? XamarinDavid Ramel writes how Xamarin.Forms won’t be on Linux or VS Code for MAUI in .NET 6, and also mentions that Xamarin.Forms 5 is dropping Visual Studio 2017 support.Leomaris Reyes writes about Xamarin Essentials.Anbu Mani works with infinite scrolling in Xamarin.Forms.Matthew Robbins embeds a JS interpreter into Xamarin apps with Jint.?? PodcastsScott Hanselman talks to Amanda Silver about living through 2020 as a remote developer.The 6-Figure Developer Podcast talks with Phillip Carter about F# and functional programming.The Azure DevOps Podcast talks with Sam Nasr about SQL Server for developers.?? VideosVisual Studio Toolbox talks about the Azure App Insights Profiler.The ASP.NET Monsters talk with Andrew Stanton-Nurse.Gerald Versluis secures a Xamarin app with fingerprint or face recognition.James Montemagno makes another Xamarin.Forms 101 video.At Technology and Friends, David Giard talks to Javier Lozano about virtual conferences.Jeff Fritz works on ASP.NET Core MVC and also APIs with ASP.NET Core.ON.NET discusses cross-platform .NET development with OmniSharp.


Insight Summer Session 2020 Update
Insight Summer Session 2020 Update

Insight recently completed its first totally remote session. This summer, we hosted more than 350 Fellows from across 33 different U.S. states, Canada, and around the world. As they move on to the post-session experience and continue to interview with hiring companies, we thought we’d share a recap of their first 7-weeks with Insight.The Remote ExperienceWhile this was the first session for Insight in which all Fellows participated remotely, we’ve been offering a remote data science program since 2015. We’re happy to report that we successfully adapted what we’ve learned during that time to all 7 of our programs across all locations. While certain aspects of the remote experience looked a bit different than the on-site offering, many remained the same, and there are also several benefits to a remote program that a more restrictive in-person session simply can’t offer. Insight continues to provide Fellows with full support from our teamsProgram Directors work directly with Fellows to provide expert mentorship, advising on project selection, development, and demoing with hiring companies.Program Operations Team provides expertise for managing the crucial program operations (communications, planning, logistics, tracking, etc.) necessary to ensure the success of each Fellow.Coaching & Development Team provides high-quality training, coaching, and feedback via workshops and small group sessions, helping Fellows identify the best roles at this stage of their careers and build the self-awareness and confidence to communicate their unique value-add.Interview Strategy Team is fully dedicated to the post-session experience, providing support and mentorship services through dedicated training and coaching groups in service of helping Fellows build the bridge to thriving careers.Partnerships Team works directly with hiring companies to understand their specific needs and secure exciting opportunities for our Fellows across all programs and locations.Kim Vo (Coaching & Development Lead, Interview Strategy Team) and Emily Kearney (Program Director, Data Science) during the fall session, September 2019.Fellows benefit from the same small cohort sizes that the in-person Fellowship offered. None of our program cohorts have grown beyond 30, with the average being about 20 Fellows. Technical advisor and alumni mentoring meetings provide opportunities for even smaller groups to gain feedback and advice, and personalized interview preparation is conducted on a one-on-one basis, tailored to relevant skills required for that company, and based on the existing skills of the individual Fellow.Fellows now also enjoy the technical advantages of using industry-standard distributed tools such as presentations via Zoom, internal messaging platforms (like Slack), and Github to contribute to a shared codebase. The remote session enabled Fellows’ increased flexibility and a more expansive reach. This summer, Fellows reported that they enjoyed the ease of meeting with other Fellows and Insight staff members every day without the wasted time of having to commute. Participating remotely also meant that Fellows were better able to set their own work times, instead of the more strict 900am?—?600pm that is required during the in-person session. 
The remote experience made it much easier to connect and work with Fellows from other locations, and expanded access to Industry Leader Mentor presentations that are generally restricted to Fellows located in the city where the mentor visit is hosted.Industry Leader Mentor SessionsAn exciting aspect of Insight’s Fellows Programs is the opportunity to participate in Industry Leader Mentoring Sessions, in which some of the tech industry’s top professionals visit with Fellows to speak about their experiences, share perspectives, and answer questions.hey @jakeklamka, thank you for having me last week for an AMA with your @InsightFellows cohort! pic.twitter.com/6QEL8WW1yHDuring the summer session, Insight was proud to host an impressive lineup of industry leaders, which includedAlexis Ohanian, the co-founder and managing partner of Initialized Capital. He was the co-founder, and later Executive Chairman, of Reddit. Ohanian is also the best selling author of Without Their Permission.Peter Norvig, a Director of Research at Google. He was previously head of Google’s core search algorithms group, and of NASA Ames’s Computational Sciences Division, making him NASA’s senior computer scientist. Norvig is co-author of Artificial Intelligence A Modern Approach.DJ Patil, who served as the first Chief Data Scientist of the United States Office of Science and Technology Policy, and is currently the Head of Technology at Devoted Health. Patil is credited with coining the term “data science”.Hilary Mason, the co-founder of Hidden Door, founder of Fast Forward Labs, a machine intelligence research company, and the Data Scientist in Residence at Accel. Mason was previously the Chief Scientist at bitly and co-founder of hackNY.Solomon Hykes, the Founder, Chief Technology Officer and Chief Architect of Docker and the creator of the Docker open source initiative. In his role at Docker, Solomon is focused on building a platform for developers and system administrators to build, ship, run and orchestrate distributed applications.Drew Conway, the Senior Vice President at Two Sigma. He was previously the founder and CEO of Alluvium, and is a leading expert in the application of computational methods to social and behavioral problems at large-scale.Chris Wiggins, the Chief Data Scientist at The New York Times, and an associate professor of applied mathematics at Columbia University. At Columbia, Wiggins is a founding member of the executive committee of the Data Science Institute, and of the Department of Systems Biology. He is also a co-founder and co-organizer of hackNY.Anne Bauer, the Director of Data Science at The New York Times, leading the team in charge of algorithmic content recommendations. Bauer is also an alum of the 2015 Insight Data Science Fellowship Program.Wes McKinney, an open source software developer focusing on data analysis tools. He created the Python pandas project and is a co-creator of Apache Arrow. McKinney is currently a director at Ursa Labs, a member of The Apache Software Foundation, and a PMC member for Apache Parquet.Diversity, Equity, and Inclusion SummitThis session, Insight hosted a Diversity, Equity, and Inclusion (DEI) Summit, with the goal of facilitating discussions to share insights, challenge perspectives, pose stimulating questions and build community with each other so that Fellows are better able to enact positive change in their new roles. 
This event brought together Insight Fellows, staff, and thought leaders from our community to discuss a range of social issues that impact how we work together, and society at-large.Lightning talks from Insight staff, an alumni panel, and a keynote speaker were utilized to spark conversation for discussing important?—?and sometimes difficult?—?topics that we don’t often have the chance to talk about candidly in our day-to-day work; topics that nevertheless tie into how we experience the workplace and interact with each other.The event IncludedLightning Talks with Insight Program Directors, who discussed a toolkit for recognizing and addressing your unconscious bias, and reflecting on anti-Black violence.Alumni Panel with Insight alumni Melecia Wright & Che Smith, who discussed the role of companies and technology in building a more equitable and socially just future.Keynote Speaker Brandeis Marshall presented, Deepfake Technology The (Mis-) Representation of Data. In this session, Dr. Marshall described deepfake technology and discussed issues related to ethics, privacy, and accountability for companies who use “black box” algorithms to make decisions and the respective societal impacts.https//medium.com/media/4b102d85b0c521351e09d68537aefb73/hrefAs Fellows prepare to enter the job market and join a professional community, we want to help them think critically about how they can contribute to creating more diverse, equitable, and inclusive work environments. Below are a few reactions that Fellows shared about the event“Thank you to every person for exposing your vulnerability to such a large group. It really helped me to engage with the conversation because I felt safe there.”“…I have attended many diversity training workshops in my schools and my previous job. However, this was the best and most genuine of them all. I can see all the effort put in it. Thank you!”“I am so grateful to Insight for letting us pause for a moment and think about our mission and responsibilities to make this world a better place through science, tech, education, communication, and reaching out to the congress to hear our voices. And most of all, I found a group of inspiring women during our discussion time through Insight platform.”Fellow SpotlightsWe have been working with so many incredibly talented Fellows this summer, and spotlighted a few to reflect their wide range of backgrounds, special skills, and experience.Top row (left to right) Jayakrishnan Parappalliyali, Paige Chang, Cameron Jones; Middle row (left to right) Karen Larson, Javed, Jaghai, Saba Khalid; bottom row (left to right) Justin Kaseman, Doaa Altarawy, Sarah UludagJayakrishnan Parappalliyali, DevOps Fellow from San Francisco. At Insight, he built an autonomous stateful application in a cost-efficient cloud infrastructure.Paige Chang, Artificial Intelligence Fellow from NYC. During the session, she applied a cutting-edge approach (reinforcement learning) to a classic data problem (song recommendation), all deployed using AWS and Tensorflow.Cameron Jones, Data Science Fellow in San Francisco. Cameron built an app for streaming positive, relevant news for Black audiences by scraping and processing articles through Natural Language Processing (Google’s BERT model).Saba Khalid, Data Engineering Fellow from New York City. For her project, Saba built an efficient data pipeline to ETL Federal Elections Committee (FEC) data to map changing trends in individual campaign contributions.Karen Larson, Health Data Science Fellow from Boston. 
Karen has been consulting for University Hospitals, building a tool to help doctors avoid ordering unnecessary lymph node biopsies for melanoma patients.Javed Jaghai, Data Science Fellow based in Washington D.C. Javed leveraged a deep learning RNN model to build the world’s first two-way translator between English and Jamaican Creole.Justin Kaseman, Decentralized Consensus Fellow based in the Bay Area. At Insight, Justin built a tool to easily bootstrap and deploy a decentralized application to help bring new developers into the space.Doaa Altarawy, Data Science Fellow in Toronto. As a Fellow, Doaa applied her strong data and engineering skills to an incredibly timely project screening chest x-rays for quick detection of COVID-19, using fastai’s deep neural net model.Sarah Uludag, Data Engineering Fellow from Los Angeles. Sarah built a tool for finding and marking public keys used in Bitcoin transactions on the Dark Web using Apache Spark, Cassandra, and Neo4J.Post Session ExperienceThe initial 7-week program of the Fellowship has come to an end, but Insight’s work with Fellows has only just begun. Fellows spent the session building projects and demoing their work, and are now engaged in interviewing with hiring companies. As of August 28, there are 356 opportunities from 238 companies, and that list continues to grow. This is a difficult time for the economy, but we’re happy to report that our current growth rate of hiring opportunities is on par with our summer session in 2019. Companies currently interviewing Fellows include Netflix, Amazon, Facebook, Google, Yelp, Apple, Pinterest, The New York Times, Johnson & Johnson, Humana, CVS, ProtonMail, Bolt Labs, and many more!Insight’s Interview Strategy Team is currently working with our Fellows on an individual level to provide personalized interview preparation support and mentorship services. We’re investing in our Fellows to honor our guarantee and ensure they’re hired quickly to launch exciting new careers in tech.The Insight GuaranteeLooking ahead to our Fall 2020 SessionInsight is currently selecting our next Fellow cohort for the fall session, and continuing to work to help reduce the barriers that keep potential Fellows from participating in Insight’s programs, taking those first steps to launch their thriving careers.ScholarshipsWe’re expanding our current offering of scholarships beyond our Need-based Scholarship, which has been available for several years to help fund essential day-to-day living needs related to the program. Starting this fall, we will now offer two additional scholarshipsScholarship for Underrepresented Minority Groups?—?This fund has been established to help remove barriers for our Fellows from racial and ethnic backgrounds that are traditionally underrepresented in tech. Eligible Fellows will have the opportunity to complete a simple application process in order to be considered, and those selected will receive up to $3000.Gender Diversity in Tech Scholarship?—?Insight is proud to continue our partnership with Clover Health, who is sponsoring this scholarship fund to help remove barriers based on gender in the tech industry. Eligible Fellows who complete the application process will be considered for a $5000 scholarship.Mentorship OpportunitiesThis fall, Insight is also piloting a mentorship program for candidates who identify as one or more historically underrepresented groups in tech (American Indian, Black/African American, Hispanic, Latinx, or Native Hawaiian/Pacific Islander). 
The purpose of this mentoring program is to provide an opportunity for applicants to ask questions about the interview process, program, post-program experience, and workforce more broadly. Many applicants do not have existing connections to the Insight network that they can turn to for information that may be available to other applicants. As we learn from this pilot program, we’ll continue to improve upon the offering with the goal of providing a more inclusive experience for all.Insight’s core value is to put Fellows first, and we are, first and foremost, committed to the long-term success of our Fellows. As we plan for the future, we’re excited to not only reach the same impressive outcomes, but to exceed them. We’ll continue to set a high bar for what constitutes success within the program, and maintain the guarantee of a job in a relevant field, earning at least a $100,000 salary within 6 months of the end of the program. In addition to our guarantee, we’re committed to improving upon and expanding our accessibility to an even broader audience.Are you ready to make a change & transition to a career in tech? Sign up to learn more about Insight Fellows programs and start your application today.This post was revised on August 28, 2020 to update the number of company opportunities.Insight Summer Session 2020 Update was originally published in Insight on Medium, where people are continuing the conversation by highlighting and responding to this story.


Dew Drop – June 21, 2023 (#3969)
Dew Drop – June 21, 2023 (#3969)

Top Links Introducing the New T4 Command-Line Tool for .NET (Mike Corsaro) How to use GitHub Copilot Prompts, tips, and use cases (Rizel Scarlett) How to Hide Your Angular Properties – # vs private Explained (Deborah Kurata) Improved .NET Debugging Experience with Source Link (Patrick Smacchia) 7 Things about C# Running Apps (Joe Mayo) Web & Cloud Development Run OpenTelemetry on Docker (B. Cameron Gain) How Much Will It Hurt? The 10 Things You Need to Do to Migrate Your MVC/Web API App to ASP.NET Core (Peter Vogel) Node v16.20.1 (LTS) and Node v20.3.1 (Current) and Node v18.16.1 (LTS) (Rafael Gonzaga) Service to check if application browser tab is active or not (silfversparre) New W3C website deployed (Coralie Mercier) How to persist Postman variables (Joyce) Dependent Stack Updates with Pulumi Deployments (Komal Ali) Detecting Scene Changes in Audiovisual Content (Avneesh Saluja, Andy Yao & Hossein Taghavi) Exploring the Exciting New Features of TypeScript 5.0 and 5.1 (Suprotim Agarwal) What is an API endpoint? (Postman Team) WinUI, .NET MAUI & XAML .NET MAUI + GitHub Actions + Commas in Certificate Names (Mitchel Sellers) Visual Studio & .NET Integer compression Implementing FastPFor decoding in C# (Oren Eini) Permutations of a String in C# (Matjaz Prtenjak) Using StringBuilder To Replace Values (Khalid Abuhakmeh) Create your own Mediator (like Mediatr) (Steven Giesel) Microsoft Forms Service’s Journey to .NET 6 (Ray Yao) Why is Windows using only even-numbered processors? (Raymond Chen) JetBrains Toolbox App 2.0 Beta Streamlines Installation and Improves Integrations (Victor Kropp) Design, Methodology & Testing One critical skill for a Scrum Master and why? (Martin Hinshelwood) Top 6 AI Coding Assistants in 2023 (Fimber Elemuwa) Big-O Notation and Complexity Analysis (Kirupa Chinnathambi) Cleaning up files changed by a GitHub Action that runs in a container (Rob Bos) To improve as an engineer, get better at requesting (and receiving) feedback (Chelsea Troy) Mobile, IoT & Game Development Get started developing mixed reality for Meta Quest 3 with Unity (Kevin Semple) Screencasts & Videos Technology & Friends – Alex Mattoni on Cycle.io (David Giard) FreeCodeSession – Episode 463 (Jason Bock) What I Wish I Knew… about interviewing for jobs (Leslie Richardson) Podcasts CodeNewbie S24E7 – Navigating Layoffs with Intention (Natalie Davis) (CodeNewbie Team) The Rework Podcast – Buckets of Time (Jason Fried & David Heinemeier Hansson) What It Takes To Be A Web Developer Part 2 – JavaScript Jabber 587 (AJ O’Neal & Dan Shappir) Python Bytes Podcast #341 – Shhh – For Secrets and Shells (Michael Kennedy) Tools and Weapons Podcast – First Vice President Nadia Calviño Architecting Spain’s AI future (Brad Smith) RunAs Radio – Windows Update for Business with Aria Carley (Richard Campbell) Defense Unicorns, A Podcast – Learning from Your Peers with Tracy Gregorio (Rob Slaughter) Community & Events Juneteenth Conference Comes to Chicago (David Giard) Celebrating Tech Trailblazers for Juneteenth (Daniel Ikem) Stack Overflow’s 2023 developer survey Are developers using AI? (Esther Shein) What Does Gen Z Want at Work? 
The Same Things You Wanted Once Upon a Time (Katie Bartlet) Meet the Skilling Champion Priyesh Wagh (Rie Moriguchi) Things to Do in Philadelphia This Week & Weekend (Visit Philly) The Next Phase of Eleventy Return of the Side Project (Zach Leatherman) Database SQL SERVER – Resolving Deadlock by Accessing Objects in the Same Order (Pinal Dave) The Right Tools for Optimizing Azure SQL Managed Instance Performance (Rie Merritt) Latest features in Azure Managed Instance for Apache Cassandra (Theo van Kraay) T-SQL Tuesday #163 – Career Advice I received (Tracy Boggiano) Miscellaneous Electronic Signatures 2023 Legal Aspects (Bjoern Meyer) Releasing Windows 11 Build 22621.1926 to the Release Preview Channel (Brandon LeBlanc) Windows 11 Moment 3 Heads to the Release Preview Channel (Paul Thurrott) Microsoft CEO Satya Nadella and many Xbox executives are set to defend its FTC case (Tom Warren) More Link Collections The Morning Brew #3731 (Chris Alcock) Sands of MAUI Issue #108 (Sam Basu) Daily Reading List – June 20, 2023 (#107) (Richard Seroter) The Geek Shelf  Learn WinUI 3 (Alvin Ashcraft)


The .NET Stacks #33: A blazing conversation with Steve Sanderson
The .NET Stacks #33 ?? A blazing conversation wi ...

Happy Monday, all. What did you get NuGet for its 10th birthday?This weekMicrosoft blogs about more .NET 5 improvementsA study on migrating a hectic service to .NET CoreMeet Jab, a new compile-time DI libraryDev Discussions Steve SandersonLast week in the .NET worldMicrosoft blogs about more .NET 5 improvementsThis week, Microsoft pushed a few more blog posts to promote .NET 5 improvements Sourabh Shirhatti wrote about diagnostic improvements, and Mána Píchová writes about .NET networking improvements.Diagnostic improvementsWith .NET 5, the diagnostic suite of tools does not require installing them as .NET global tools—they can now be installed without the .NET SDK. There’s now a single-file distribution mechanism that only requires a runtime of .NET Core 3.1 or higher. You can check out the GitHub repo to geek out on all the available diagnostics tools. In other news, you can now perform startup tracing from EventPipe as the tooling can now suspend the runtime during startup until a tool is connected. Check out the blog post for the full treatment.Networking improvementsIn terms of .NET 5 networking improvements, the team added the ability to use cancellation timeouts from HttpClient without the need for a custom CancellationToken. While the client still throws a TaskCanceledException, the inner exception is a TimeoutException when timeouts occur. .NET 5 also supports multiple connections with HTTP/2, a configurable ping mechanism, experimental support for HTTP/3, and various telemetry improvements. Check out the networking blog post for details. It’s a nice complement to Stephen Toub’s opus about .NET 5 performance improvements.A study on migrating a hectic service to .NET CoreThis week, Avanindra Paruchuri wrote about migrating the Azure Active Directory gateway—and its 115 billion daily requests—over to .NET Core. While there’s nothing preventing you hosting .NET Framework apps in the cloud, the bloat of the framework often leads to expensive cloud spend.The gateway’s scale of execution results in significant consumption of compute resources, which in turn costs money. Finding ways to reduce the cost of executing the service has been a key goal for the team behind it. The buzz around .NET Core’s focus on performance caught our attention, especially since TechEmpower listed ASP.NET Core as one of the fastest web frameworks on the planet.In Azure AD gateway’s case, we were able to cut our CPU costs by 50%. As a result of the gains in throughput, we were able to reduce our fleet size from ~40k cores to ~20k cores (50% reduction) … Our CPU usage was reduced by half on .NET Core 3.1 compared to .NET Framework 4.6.2 (effectively doubling our throughput).It’s a nice piece on how they were able to gradually move over and gotchas they learned along the way.Meet Jab, a new compile-time DI libraryThis week, Pavel Krymets introduced Jab, a library used for compile-time dependency injection. Pavel works with the Azure SDKs and used to work on the ASP.NET Core team. Remember a few weeks ago, when we said that innovation in C# source generators will be coming in 2021? Here we go.From the GitHub readme, it promises fast startup (200x more than Microsoft.Extensions.DependencyInjection), fast resolution (a 7x improvement), no runtime dependencies, with all code generating during project compilation. Will it run on ASP.NET Core? 
Not likely, since ASP.NET Core is heavily dependent on the runtime thanks to type accessibility and dependency discovery, but Pavel wonders if there’s a middle ground.Dev Discussions Steve SandersonIt seems like forever ago when, at NDC Oslo in 2017, Steve Sanderson showed off a new web UI framework with the caveat “an experiment, something for you to be amused by.” By extending Dot Net Anywhere (DNA), Chris Bacon’s portable .NET runtime, on WebAssembly, he was able to load and run C# in the browser. In the browser!Of course, this amusing experiment has grown into Blazor, a robust system for writing web UIs in C#. I was happy to talk to Steve Sanderson about his passions for the front-end web, how far Blazor has come, and what’s coming to Blazor in .NET 6.Years ago, you probably envisioned what Blazor could be. Has it met its potential, or are there other areas to focus on?We’re not there yet. If you go on YouTube and find the first demo I ever did of Blazor at NDC Oslo in 2017, you’ll see my original prototype had near-instant live reloading while coding, and the download size was really tiny. I still aspire to get the real version of Blazor to have those characteristics. Of course, the prototype had the advantage of only needing to do a tiny number of things—creating a production-capable version is 100x more work, which is why it hasn’t yet got there, but has of course exceeded the prototype vastly in more important ways.Good news though is that in .NET 6 we expect to ship an even better version of live-updating-while-coding than I had in that first prototype, so it’s getting there!When looking at AOT, you’ll see increased performance but a larger download size. Do you see any other tradeoffs developers will need to consider?The mixed-mode flavour of AOT, in which some of your code is interpreted and some is AOT, allows for a customizable tradeoff between size and speed, but also includes some subtleties like extra overhead when calling from AOT to interpreted code and vice-versa.Also, when you enable AOT, your app’s publish time may go up substantially (maybe by 5-10 minutes, depending on code size) because the whole Emscripten toolchain just takes that long. This wouldn’t affect your daily development flow on your own machine, but likely means your CI builds could take longer.It’s still quite impressive to see the entire .NET runtime run in the browser for Blazor Web Assembly. That comes with an upfront cost, as we know. I know that the Blazor team has done a ton of work to help lighten the footprint and speed up performance. With the exception of AOT, do you envision more work on this? Do you see a point where it’ll be as lightweight as other leading front-end frameworks, or will folks need to understand it’s a cost that comes with a full framework in the browser?The size of the .NET runtime isn’t ever going to reduce to near-zero, so JS-based microframeworks (whose size could be just a few KB) are always going to be smaller. We’re not trying to win outright based on size alone—that would be madness. 
Blazor WebAssembly is aimed to be maximally productive for developers while being small enough to download that, in very realistic business app scenarios, the download size shouldn’t be any reason for concern.That said, it’s conceivable that new web platform features like Signed HTTP Exchanges could let us smartly pre-load the .NET WebAssembly runtime in a browser in the background (directly from some Microsoft CDN) while you’re visiting a Blazor WebAssembly site, so that it’s instantly available at zero download size when you go to other Blazor WebAssembly sites. Signed HTTP Exchanges allow for a modern equivalent to the older idea of a cross-site CDN cache. We don’t have a definite plan about that yet as not all browsers have added support for it.Check out the entire interview at my site.?? Last week in the .NET world?? The Top 3Andrew Lock introduces the ASP.NET Core Data Protection system.Maarten Balliauw writes about building a friendly .NET SDK.Josef Ottosson writes an Azure Function to zip multiple files from Azure Storage.?? AnnouncementsShelley Bransten announces Microsoft Cloud for Retail.Christopher Gill celebrates NuGet’s 10th birthday.Tara Overfield releases the January 2021 Security and Quality Rollup Updates for .NET Framework, and Rahul Bhandari writes about the .NET January 2021 updates..NET 6 nightly builds for Apple M1 are now available.The Visual Studio team wants your feedback on Razor syntax coloring.?? Community and eventsThe .NET Docs Show talks to Luis Quintanilla about F#.Pavel Krymets introduces Jab, a compile-time DI container.The Entity Framework Standup talks about EF Core 6 survey results, and the Languages & Runtime standup discusses plans for .NET 6 and VB source generators.Sarah Novotny writes about 4 open source lessons for 2021.IdentityServer v5 has shipped.Khalid Abuhakmeh rethinks OSS attribution in .NET.TechBash 2021 is slated for October 19-22, 2021.?? Web developmentDave Brock builds a “search-as-you-type” box in Blazor.Cody Merritt Anhorn uses localization with Blazor.Changhui Xu uploads files with Angular and .NET Web API.Mark Pahulje uses HtmlAgilityPack to get all emails from an HTML page.Jon Hilton uses local storage with Blazor.Anthony Giretti tests gRPC endpoints with gRPCurl, and also explores gRPCui.The folks at Uno write about building a single-page app in XAML and C# with WebAssembly.Marinko Spasojevic handles query strings in Blazor WebAssembly.Daniel Krzyczkowski continues building out his ASP.NET Core Web API by integrating with Azure Cosmos DB.?? The .NET platformSean Killeen describes the many flavors of .NET.Mattias Karlsson writes about his boilerplate starting point for .NET console apps.David Ramel delivers a one-stop shop for .NET 5 improvements.Sam Walpole discusses writing decoupled code with MediatR.Sourabh Shirhatti writes about diagnostics improvements with .NET 5.Mána Píchová writes about .NET 5 networking improvements.? 
The cloudAvanindra Paruchuri writes about migrating the Azure AD gateway to .NET Core.Johnny Reilly works with Azure Easy Auth.Muhammed Saleem works with Azure Functions.Chris Noring uses Azure Key Vault to manage secrets.Bryan Soltis posts a file to an Azure Function in 3 minutes.Damian Brady generates a GitHub Actions workflow with Visual Studio or the dotnet CLI.Thomas Ardal builds and tests multiple .NET versions with GitHub Actions.Dominique St-Amand works with integration tests using Azure Storage emulator and .NET Core in Azure DevOps.Aaron Powell uses environments for approval workflows with GitHub Actions.Damien Bowden protects legacy APIs with an ASP.NET Core YARP reverse proxy and Azure AD Auth.?? LanguagesKhalid Abuhakmeh writes about Base64 encoding with C#.Franco Tiveron writes about a developer’s C# 9 cheat sheet.Bruno Sonnino uses C# to convert XML data to JSON.Jacob E. Shore writes about his first impressions of F#.Matthew Crews writes about learning resources for F#.Mark-James McDougall writes an iRacing SDK implementation in F#.?? ToolsElton Stoneman writes about understanding Microsoft’s Docker images for .NET apps.Jon P. Smith writes about updating many-to-many relationships in EF Core 5 and above.Ruben Rios writes about a more integrated terminal experience with Visual Studio.Benjamin Day writes about tests in Visual Studio for Mac.The folks at Packt write about DAPR.Peter De Tender publishes Azure Container Instances from the Docker CLI.Nikola Zivkovic writes about linear regression with ML.NET.Patrick Smacchia writes how NDepend used Resharper to quickly refactored more than 23,000 calls to Debug.Assert().Mark Heath discusses his plans for NAudio 2.Michal Bialecki asks is Entity Framework Core fast?Jon P. Smith introduces a library to automate soft deletes in EF Core.?? XamarinLeomaris Reyes introduces UX design with Xamarin Forms.Charlin Agramonte writes about XAML naming conventions in Xamarin.Forms.Leomaris Reyes works with the Infogram in Xamarin.Forms 5.0.Rafael Veronezi previews XAML UIs.James Montemagno writes about how to integrate support emails in mobile apps with data and logs.Leomaris Reyes writes about the Xamarin.Forms File Picker.?? Design, testing, and best practicesSteve Gordon writes about how to become a better developer by asking questions.Derek Comartin says start with a monolith, not microservices.Stephen Cleary writes about durable queues.?? PodcastsScott Hanselman explores event modeling with Adam Dymitruk.At Working Code podcast, a discussion on monoliths vs. microservices.The .NET Rocks podcast checks in on IdentityServer.The .NET Core Show talks Blazor with Chris Sainty.The 6-Figure Developer podcast talks to Christos Matskas about Microsoft Identity.?? VideosThe ON.NET Show inspects application metrics with dotnet-monitor, works on change notifications with Microsoft Graph, and inspects application metrics with dotnet-monitor.Scott Hanselman shows you what happens when after you enter a URL in your browser.The ASP.NET Monsters talk about migrating their site to Azure Blob Storage..At Technology and Friends, David Giard talks to Mike Benkovich about GitHub Actions and Visual Studio.


Announcing the availability of Feathr 1.0
Announcing the availability of Feathr 1.0

This blog is co-authored by Edwin Cheung, Principal Software Engineering Manager, and Xiaoyong Zhu, Principal Data Scientist.

Feathr is an enterprise-scale feature store that facilitates the creation, engineering, and usage of machine learning features in production. It has been used by many organizations as an online/offline store, as well as for real-time streaming. Today, we are excited to announce the much-anticipated availability of OSS Feathr 1.0. It contains many new features and enhancements added since Feathr became open source one year ago. Capabilities such as online transformation, the rapid sandbox environment, and MLOps V2 accelerator integration really accelerate the development and deployment of machine learning projects at enterprise scale.

Online transformation via domain specific language (DSL)

In various machine learning scenarios, feature generation is required for both training and inference. A limitation today is that the data source cannot come from an online service, because transformation only happens before feature data is published to the online store, while some scenarios require transformation close to real time. In such cases, there is a need for a mechanism that lets the user run transformations on the inference data dynamically, before inferencing with the model. The new online transformation via DSL feature addresses these challenges with a custom transformation engine that can process transformation requests and responses close to real time, on demand. It allows transformation logic to be defined declaratively using a DSL syntax based on EBNF. It also provides extensibility, where there is a need to define custom complex transformations, by supporting user-defined functions (UDFs) written in Python or Java.

    nyc_taxi_demo(pu_loc_id as int, do_loc_id as int, pu_time as string, do_time as string, trip_distance as double, fare_amount as double)
    project duration_second = (to_unix_timestamp(do_time, "%Y/%-m/%-d %-H:%-M") - to_unix_timestamp(pu_time, "%Y/%-m/%-d %-H:%-M"))
    | project speed_mph = trip_distance * 3600 / duration_second;

This declarative logic runs in a new high-performance DSL engine. We provide a Helm chart to deploy this service on container-based technology such as Azure Kubernetes Service (AKS). The transformation engine can also run as a standalone executable, an HTTP server that can be used to transform data for testing purposes, published as the container image feathrfeaturestore/feathrpiper:latest. For example:

    curl -s -H "content-type: application/json" http://localhost:8000/process -d '{"requests": [{"pipeline": "nyc_taxi_demo_3_local_compute", "data": {"pu_loc_id": 41, "do_loc_id": 57, "pu_time": "2020/4/1 0:41", "do_time": "2020/4/1 0:56", "trip_distance": 6.79, "fare_amount": 21.0}}]}'

It also provides the ability to auto-generate the DSL file if there are already predefined feature transformations that were created for the offline transformation.
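If you prefer to exercise the same endpoint from Python, for example from a test suite or a notebook, the equivalent of the curl call above looks like the following. This is a minimal sketch using the requests library; the host, port, pipeline name, and payload simply mirror the example shown here, and you would substitute your own deployment's address.

    import requests

    payload = {
        "requests": [
            {
                "pipeline": "nyc_taxi_demo_3_local_compute",
                "data": {
                    "pu_loc_id": 41,
                    "do_loc_id": 57,
                    "pu_time": "2020/4/1 0:41",
                    "do_time": "2020/4/1 0:56",
                    "trip_distance": 6.79,
                    "fare_amount": 21.0,
                },
            }
        ]
    }

    # POST the inference-time record to the DSL transformation service.
    resp = requests.post("http://localhost:8000/process", json=payload, timeout=5)
    resp.raise_for_status()
    print(resp.json())  # transformed features, e.g. duration_second and speed_mph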
Online transformation performance benchmark

It is imperative that online transformation performs close to real time and meets the low-latency, high queries per second (QPS) demands of many enterprise customers. To determine the performance, we conducted a benchmark with three tests: first, deployment on AKS with traffic going through the ingress controller; second, traffic going through the AKS internal load balancer; and finally, traffic via localhost.

Benchmark A: traffic going through the ingress controller (AKS)

Infrastructure setup:
Test agent runs on 1 pod on a node of size Standard_D8ds_v5.
Transform function deployed as a Docker image running on 1 pod on a different node of size Standard_D8ds_v5 in the same AKS cluster.
The agent sends requests through the service hostname, which means traffic goes through the ingress controller.
Test command: ab -k -c {concurrency_count} -n 1000000 http://feathr-online.trafficmanager.net/healthz

Benchmark A result (latency percentiles in milliseconds):

    Total Requests  Concurrency  p90  p95  p99  Requests/sec
    1000000         100          3    4    9    43710
    1000000         200          6    8    15   43685
    1000000         300          10   11   18   43378
    1000000         400          13   15   21   43220
    1000000         500          16   19   24   42406

Benchmark B: traffic going through the AKS internal load balancer

Infrastructure setup:
Test agent runs on 1 pod on a node of size Standard_D8ds_v5.
Transform function deployed as a Docker image running on 1 pod on a different node of size Standard_D8ds_v5 in the same AKS cluster.
The agent sends requests through the service IP, which means traffic goes through the internal load balancer.
Test command: ab -k -c {concurrency_count} -n 1000000 http://10.0.187.2/healthz (for example, ab -k -c 100 -n 1000000 http://10.0.187.2/healthz)

Benchmark B result (latency percentiles in milliseconds):

    Total Requests  Concurrency  p90  p95  p99  Requests/sec
    1000000         100          3    4    4    47673
    1000000         200          5    7    8    47035
    1000000         300          9    10   12   46613
    1000000         400          11   12   15   45362
    1000000         500          14   15   19   44941

Benchmark C: traffic going through localhost (AKS)

Infrastructure setup:
Test agent runs on 1 pod on a node of size Standard_D8ds_v5.
Transform function deployed as a Docker image running on the same pod.
The agent sends requests through localhost, which means there is no network traffic at all.
Test command: ab -k -c {concurrency_count} -n 1000000 http://localhost/healthz

Benchmark C result (latency percentiles in milliseconds):

    Total Requests  Concurrency  p90  p95  p99  Requests/sec
    1000000         100          2    2    3    59466
    1000000         200          4    4    5    59433
    1000000         300          6    6    8    60184
    1000000         400          8    9    10   59622
    1000000         500          10   11   14   59031

Benchmark summary:
If the transform service and the upstream caller are on the same host/pod, the p95 latency is very good, staying within about 10 ms for concurrency below 500.
If they are on different hosts/pods, the p95 latency may be roughly 2-4 ms higher when traffic goes through the internal load balancer.
If they are on different hosts/pods, the p95 latency may be roughly 2-8 ms higher when traffic goes through the ingress controller.

Benchmark thanks to Blair Chan and Chen Xu. For more details, check out the online transformation guide.

Getting started with the sandbox environment

This is an exciting feature, especially for data scientists who may not have the necessary infrastructure background or know how to deploy the infrastructure in the cloud. The sandbox is a fully featured, quick-start Feathr environment that enables organizations to rapidly prototype various capabilities of Feathr without the burden of full-scale infrastructure deployment. It is designed to make it easier for users to get started quickly, validate feature definitions and new ideas, and work interactively. By default, it comes with a Jupyter notebook environment for interacting with the Feathr platform. Users can also use the user experience (UX) to visualize features, lineage, and other capabilities. To get started, check out the quick start guide to the local sandbox.

Feathr with MLOps V2 accelerator

The MLOps V2 solution accelerator provides a modular, end-to-end approach to MLOps in Azure based on pattern architectures. We are pleased to announce an initial integration of Feathr into the classical pattern that enables Terraform-based infrastructure deployment as part of the infrastructure provisioning with an Azure Machine Learning (AML) workspace.
With this integration, enterprise customers can use the templates to customize their continuous integration and continuous delivery (CI/CD) workflows and run end-to-end MLOps in their organization. Check out the Feathr integration with MLOps V2 deployment guide.

Feathr GUI enhancements

We have added a number of enhancements to the graphical user interface (GUI) to improve usability. These include support for registering features, support for deleting features, support for displaying versions, and quick access to lineage via the top menu. Try out the demo UX on our live demo site.

What's next

The Feathr journey has just begun; this is the first stop, with many great things to come. Stay tuned for more enterprise enhancements, security, monitoring, and compliance features, along with a richer MLOps experience. Check out how you can also contribute to this great project, and if you have not already, join our Slack channel here.

The post Announcing the availability of Feathr 1.0 appeared first on Microsoft Open Source Blog.


2022.1.7929 maintenance release
2022.1.7929 maintenance release

Another (possibly final!) Seq 2022.1 maintenance release is now available from Docker Hub, AWS ECR, and at datalust.co/download.

What's included?

In this release you'll find some small bug fixes and improvements:

Future events will use server timestamps by default (#1570) - events arriving with timestamps greater than 57 minutes into the future will be renumbered using the server clock, avoiding usability and performance problems.
Users with read-only permissions can now view Alerts (#1606) - underlying API permissions have been tweaked to enable read-only users to successfully view the Alerts dashboard.
Long usernames no longer push personal alert titles off-screen (#1605) - this display bug made personal alert titles unreadable for some users.
Under Docker, TLS certificates can be provided in PEM format (#1609) - Seq now accepts this popular certificate format, along with PFX.
Under Docker, Seq will push active file pages out of RAM to recover cache space (#1608) - this helps avoid a common failure mode whereby the Linux page cache consumes all available RAM, causing Seq queries to execute slowly.
Indexing errors are now properly logged (#1604) - in addition to displaying a message in the server status indicator, Seq now correctly logs errors raised during indexing.
Update to .NET SDK 6.0.301 (#1610) - includes .NET Runtime 6.0.6.

Upgrading

The Seq 2022.1.7929 release is a highly compatible, in-place upgrade for recent Seq versions. On Windows, run the MSI file and click through the post-installation setup wizard. On Linux, docker pull datalust/seq:latest and restart your container. We're looking forward to writing more about the next major Seq release in the coming weeks. In the meantime, if you have questions or need any help upgrading to this maintenance release, please reach out via [email protected]. 👋


