Search results for "evaluate": 1 - 16 of 16

Because the tag search matched too few results, title search results are shown instead.

There are 16 entries related to "evaluate". Related tags include 機械学習 (machine learning), 戦争 (war), and ui. Popular entries include "How to evaluate MLOps Platforms".
  • How to evaluate MLOps Platforms

    Companies that pioneered the application of AI at scale did so using in-house ML platforms (Facebook, Uber, LinkedIn, etc.). These capabilities are now available in off-the-shelf products. The rush to MLOps has led to too much choice: there are hundreds of tools and at least 40 platforms available. Image by Thoughtworks, from Guide to Evaluating MLOps Platforms. This is a very difficult landscape to navigate…

  • GitHub - flutter/gallery: Flutter Gallery was a resource to help developers evaluate and use Flutter

  • GitHub - Hexagon/croner: Trigger functions or evaluate cron expressions in JavaScript or TypeScript. No dependencies. Most features. Node. Deno. Bun. Browser.

  • GitHub - evidentlyai/evidently: Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
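
    As a rough illustration of what evidently does, here is a minimal data-drift check. The import paths and preset name are assumptions tied to the library's older 0.x Report API, not code from the repository:

      # Minimal data-drift check (assumes evidently's 0.x Report API).
      import pandas as pd
      from evidently.report import Report
      from evidently.metric_preset import DataDriftPreset

      # Toy reference (validation-time) and current (production) samples.
      reference = pd.DataFrame({"feature": [0.1, 0.2, 0.3, 0.4, 0.5]})
      current = pd.DataFrame({"feature": [0.9, 1.1, 1.0, 1.2, 0.95]})

      # Run the drift preset and write an HTML report for inspection.
      report = Report(metrics=[DataDriftPreset()])
      report.run(reference_data=reference, current_data=current)
      report.save_html("drift_report.html")
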
  • Evaluate the reliability of Retrieval Augmented Generation applications using Amazon Bedrock | Amazon Web Services

    Retrieval Augmented Generation (RAG) is a technique that enhances large language models (LLMs) by incorporating external knowledge sources. It allows LLMs to reference authoritative knowledge bases or internal repositories before generating responses, producing output tailored to… A sketch of the evaluation idea follows after this entry.

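    One way to picture the reliability check the AWS post discusses is to use a Bedrock-hosted model as a judge of whether a RAG answer is supported by the retrieved context. A minimal sketch with boto3's Converse API; the model ID, region, and prompt wording are illustrative assumptions, not the article's code:

      # LLM-as-judge sketch: ask a Bedrock model whether an answer is
      # grounded in the retrieved context (illustrative assumptions only).
      import boto3

      client = boto3.client("bedrock-runtime", region_name="us-east-1")

      context = "The Eiffel Tower is 330 metres tall."
      answer = "The Eiffel Tower is about 330 metres tall."
      prompt = (
          "Reply GROUNDED if the answer is fully supported by the context, "
          f"otherwise UNGROUNDED.\n\nContext: {context}\nAnswer: {answer}"
      )

      # The response text is the judge's verdict on this answer.
      response = client.converse(
          modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
          messages=[{"role": "user", "content": [{"text": prompt}]}],
      )
      print(response["output"]["message"]["content"][0]["text"])
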
  • The trump card of Excel VBA: how to use Evaluate - えくせるちゅんちゅん

    This time I introduce the trump card of Excel VBA: the Application.Evaluate method. Contents: Introduction / What the Application.Evaluate method is / About Evaluate / Basic Evaluate syntax / The difference between Evaluate and square brackets / How Evaluate differs from formulas written on a worksheet / Semicolons versus commas / Uses for Evaluate / Getting high-precision times / Building arrays / Common pitfalls (VBA cannot be called inside square brackets, arrays cannot be passed as arguments, return values can be reference or value types) / Summary. Introduction: My followers may have seen me answer #VBAクイズ and #ワンライナー challenges with VBA that uses square brackets. For how much I use it, I have never properly explained it, and I have long felt I ought to give it a thorough write-up. Always as joke code…

  • Evaluate VMware Products

    Scalable, elastic private cloud IaaS solution. Key Technologies: vSphere | vSAN | NSX | Aria

  • GitHub - microsoft/prompty: Prompty makes it easy to create, manage, debug, and evaluate LLM prompts for your AI applications. Prompty is an asset class and format for LLM prompts designed to enhance observability, understandability, and portability for d…

  • Evaluate prompts in the developer console

    When building AI-powered applications, prompt quality significantly impacts results. But crafting high-quality prompts is challenging, requiring deep knowledge of your application's needs and expertise with large language models. To speed up development and improve outcomes, we've streamlined this process to make it easier for users to produce high-quality prompts. You can now generate, test, and…

  • Top 8 TypeScript ORMs, Query Builders, Libraries: Evaluate Type Safety

    Evaluating the level of type safety a TypeScript ORM provides out of the box can be time-consuming. This article briefly assesses the type safety of the libraries considered in Top 11 Node.js ORMs, Query Builders & Database Libraries in 2022. While all of the libraries considered in this article…

  • GitHub - tc39/proposal-defer-import-eval: A proposal for introducing a way to defer evaluate of a module

  • Evaluating QA with Claude 3.5 Sonnet, GPT-4o, and Gemini 1.5 Pro using MLflow LLM Evaluate - Qiita

    Tags: AWS, Azure, Databricks, MLflow, LLM. Introduction: I am nttd-saitouyun, and I promote AWS and Databricks in the Digital Success Solution Division of NTT DATA Corporation. In the articles below, I set up Databricks so it can use Claude 3.5 Sonnet on Amazon Bedrock, GPT-4o on Azure OpenAI, and Gemini 1.5 Pro on Google Cloud Vertex AI: "Using Amazon Bedrock LLMs from a Databricks Mosaic AI Model Serving Endpoint", "Datab… A minimal mlflow.evaluate sketch follows after this entry.

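    For context, scoring a static table of QA outputs against ground truth with mlflow.evaluate looks roughly like this. This is a minimal sketch assuming MLflow 2.x, with toy data standing in for the article's model serving endpoints:

      # Score pre-computed QA answers against ground truth (assumes MLflow 2.x).
      import mlflow
      import pandas as pd

      eval_data = pd.DataFrame(
          {
              "inputs": ["What is MLflow?"],
              "ground_truth": ["MLflow is an open source platform for the ML lifecycle."],
              "outputs": ["MLflow is an open source MLOps platform."],
          }
      )

      # model_type selects built-in QA metrics such as exact-match
      # (some metrics pull in extra packages).
      results = mlflow.evaluate(
          data=eval_data,
          targets="ground_truth",
          predictions="outputs",
          model_type="question-answering",
      )
      print(results.metrics)
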
  • GitHub - IBM/AutoMLPipeline.jl: A package that makes it trivial to create and evaluate machine learning pipeline architectures.

  • Online searches to evaluate misinformation can increase its perceived veracity - Nature

    Concern over the impact of misinformation has continued to grow, as high levels of belief in misinformation have threatened democratic legitimacy in the United States [1] and global public health during the COVID-19 pandemic [2]. Considerable attention among scholars, media and policymakers alike has been paid to the role of social media platforms in the spread of, and belief in, misinformation [3,4], with…

  • [Machine Learning] Trying out "Evaluate", Hugging Face's metric-calculation library

    Let's try out "Evaluate", the metric-calculation library from Hugging Face, the company known for NLP libraries such as transformers. The code for this article, runnable on Google Colab, is published here: "Trying out Hugging Face's new library 'Evaluate'". Hello, I am Taichi Ando from the data course at PlayGround. Hugging Face, known for the NLP model library transformers, recently announced a new library, "Evaluate", so let's give it a try. Contents: What Evaluate is / Computing basic metrics / Using Evaluator / Summary / References. What Evaluate is: Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. Existing evaluation metrics cover NLP (natural… A minimal usage sketch follows after this entry.

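    The basic pattern the post walks through, as a minimal sketch ("accuracy" is one of the standard metrics the library ships):

      # Load a standard metric and compute it on toy labels.
      import evaluate

      accuracy = evaluate.load("accuracy")
      result = accuracy.compute(
          predictions=[0, 1, 1, 0],
          references=[0, 1, 0, 0],
      )
      print(result)  # {'accuracy': 0.75}
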
  • How to evaluate the risk of nuclear war

    The threat of nuclear war looms over the invasion of Ukraine (Credit: Diego Herrera/Getty Images). How do researchers gauge the probability and severity of nuclear war? Catastrophic-risk expert Seth Baum explains. One day last week, I woke up in the morning and looked out the window to see the Sun was shining. My neighbourhood in the New York City area was calm and normal. "OK good," I said to myself…
