Blog

  • webpack-cli

    Tech stack

    • vue2 + vuex + vue-router + axios SPA
    • koa2 + koa-router + lowdb + ejs templates
    • webpack3 + HMR
    • mock backend + environment configuration + bundling
    • ES6+ build scripts, ES6+ application code

    Development environment:

    node: V8+

    To run the project properly, upgrading Node to V8+ is recommended.

    Usage

    npm run dev
    1. Run this command for local development; there is no need to start the backend service, a local dev server starts automatically.
    2. Once the dev server is up, double-click the URL printed on the command line to open the home page in the browser. You can change the dev server configuration in the config folder.
      Save your code and the browser hot-reloads.
    3. The frontend has a single entry template, build/server/views/template.ejs, rendered with the html-webpack-plugin plugin. Passing showHtmlWebpackPlugin:true to html-webpack-plugin makes the page print the htmlWebpackPlugin data when it renders (see the sketch after this list).
    4. For convenience, npm run dev is set up to bypass the backend routes.
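
    As a hedged sketch (the project's actual template markup may differ), the template can read that flag through the data html-webpack-plugin exposes to EJS templates:

    <% if (htmlWebpackPlugin.options.showHtmlWebpackPlugin) { %>
      <!-- dump the plugin options for inspection during development -->
      <pre><%= JSON.stringify(htmlWebpackPlugin.options, null, 2) %></pre>
    <% } %>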
    npm run build

    Builds uncompressed code into the dist directory, which makes it easy to inspect the bundled but unminified output. No hot reload.

    npm run prod

    Use this command to build staging or production code; the output is minified.

    npm run server

    Starts the local mock API server. If dist/manifest.json does not exist, it additionally runs npm run build --banwatch once to bundle the frontend code to disk; if dist/manifest.json already exists, npm run build --banwatch is not run again.

    npm start

    Starts the local mock API server and the frontend build service (with hot reload) at the same time.

    npm watch

    nodemon watches the code in the build directory (i.e., the build scripts) and automatically restarts the service when it changes.

    npm run build distcustom

    Sets the output directory to distcustom.

    npm run build distcustom --banwatch

    Sets the output directory to distcustom and exits after a single build, i.e., it does not watch for file changes.

    npm run build --report

    Analyzes the assets generated by the webpack build with the webpack-bundle-analyzer plugin.

    Directory structure

    Project root

    |---build project build code
    |  |---config configuration used to build the project
    |  |---task build task entry points
    |  |---server local dev server
    |  |---webpack.config.dev.js webpack development config
    |  |---webpack.config.prod.js webpack production config
    |---config project configuration
    |---node_modules node modules
    |---src application source code
    |  |---test business-level entry
    |  |  |---assets business images and styles
    |  |  |---components business components
    |  |  |---views business views
    |  |  |---api.js API calls
    |  |  |---router.js business routes
    |  |---example demo directory for the components
    |  |  |---pages demo pages
    |  |  |---config.json demo route configuration
    |  |  |---demos.vue demo entry file
    |  |  |---router.js demo routes
    |  |---global global resources
    |  |  |---assets images and CSS files
    |  |  |---iconfont icon fonts
    |  |---router SPA site routes
    |  |  |---index.js
    |  |---store vuex modules
    |  |  |---index.js
    |---dist bundled output directory
    |---zip  zip package directory
    |---.babelrc      babel runtime config
    |---.editorconfig editor config
    |---.gitignore    git ignore rules
    |---package.json  npm config
    |---postcss.config.js postcss plugin config
    |---README.md project readme
    

    Tips

    The vetur extension for VS Code does not format HTML by default. To enable HTML formatting, add the following under Preferences > Settings:

    //https://github.com/vuejs/vetur/issues/99
    "vetur.format.defaultFormatter.html": "js-beautify-html",
    "vetur.format.defaultFormatterOptions": {
      "js-beautify-html": {
        // js-beautify-html settings, see https://github.com/vuejs/vetur/blob/master/server/src/modes/template/services/htmlFormat.ts
        "wrap_attributes": "force-aligned"
      }
    }

    Visit original content creator repository
    https://github.com/zhenghuahou/webpack-cli

  • dbt-sap-hana-cloud

    DBT SAP HANA Cloud adapter

    REUSE status

    Description

    The DBT SAP HANA Cloud Adapter allows seamless integration of dbt with SAP HANA Cloud, enabling data transformation, modeling, and orchestration.

    Requirements

    python>=3.9
    dbt-core>=1.9.0
    dbt-adapters>=1.7.2
    dbt-common>=1.3.0
    hdbcli>=2.22.32
    

    Python virtual environment

    • A Python virtual environment needs to be created and activated before we begin the installation process

      python3 -m venv <name>
      

      Enter a name for the virtual environment in the placeholder.

    • Activate the virtual environment

      source <name>/bin/activate
      

      Use the same name you gave above.

    Download and Installation

    Step 1. Install dbt-sap-hana-cloud adapter

    1. Clone the dbt-sap-hana-cloud repository
    2. Navigate to the cloned repository

      cd /path/to/dbt-sap-hana-cloud

    3. For installation, use the command below:
      pip3 install .
      

    Step 2. Create a dbt project

    • Initialize a new dbt project in a different location
      dbt init
      
      Choose dbt-saphanacloud from the list and fill in the fields requested after the selection.

    Step 3. Profile setup (in case the dbt init command fails)
    1. Edit the $HOME/.dbt/profiles.yml file within the .dbt folder (create it if it does not exist).
    2. Add the following configuration, replacing placeholders with your SAP HANA credentials:

    Sample profile

    my-sap-hana-cloud-profile:
      target: dev
      outputs:
        dev:
          type: saphanacloud
          host: <host>       # SAP HANA cloud host address
          port: <port>       # Port for SAP HANA cloud
          user: <user>       # SAP HANA cloud username
          password: <password> # SAP HANA cloud password
          database: <database> # Database to connect to
          schema: <schema>   # Schema to use
          threads: <threads> # Number of threads you want to use

    Step 4. Link profile to dbt project (in case the dbt init command fails)

    • In your dbt_project.yml file (located in your dbt project folder), reference the profile name:
      profile: my-sap-hana-cloud-profile

    Step 5. Test connection

    • In the terminal, navigate to your dbt project folder
      cd /path/to/dbt-project
      
    • Run the following command to ensure dbt can connect to SAP HANA Cloud:
      dbt debug
      

    Test cases for adapter

    Step 1. Navigate to the dbt-sap-hana-cloud repository

    cd /path/to/dbt-sap-hana-cloud
    

    Step 2. Install Development Requirements

    • In the dbt-sap-hana-cloud folder, install the packages from dev_requirements.txt:
      pip3 install -r dev_requirements.txt
      

    Step 3. Create a test.env File

    • In the same folder as the adapter, create a test.env file and add the following:

      DBT_HANA_HOST=<host>       # SAP HANA Cloud host address
      DBT_HANA_PORT=<port>       # SAP HANA Cloud port
      DBT_HANA_USER=<user>       # SAP HANA Cloud username
      DBT_HANA_PASSWORD=<password> # SAP HANA Cloud password
      DBT_HANA_DATABASE=<database> # Database to connect to
      DBT_HANA_SCHEMA=<schema>   # Schema to use
      DBT_HANA_THREADS=<threads> # number of threads you want to use
      # Create 3 users in hana db with the same name as below to test grants
      DBT_TEST_USER_1= DBT_TEST_USER_1
      DBT_TEST_USER_2= DBT_TEST_USER_2
      DBT_TEST_USER_3= DBT_TEST_USER_3
      

    Step 4. Test adapter functionality

    • Run the following command to execute functional tests:
      python3 -m pytest tests/functional/
      

    DBT SAP HANA Cloud specific configuration

    Table Type

    • To choose the type of table created for an incremental model or a table model, add this option to the model's config block (a complete example follows the note below):
      table_type='row'
      
      Two options are available for the table type: 'row' or 'column'.

    Note: The table type defaults to column if nothing is specified.
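
    For instance, a minimal model using a row table could look like this (the select is illustrative):

    {{
      config(
        materialized = "table",
        table_type = 'row'
      )
    }}

    select 1 as ID, 'car' as CATEGORY from sys.dummy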

    Index

    There are five types of indexes in the DBT SAP HANA Cloud adapter.

    1. Row table
      • BTREE
      • CPBTREE
    2. Column table
      • INVERTED VALUE
      • INVERTED HASH
      • INVERTED INDIVIDUAL

    Below are example configurations for both:

    • Row table
      {{
        config(
          materialized = "table",
          table_type='row',
          indexes=[
            {'columns': ['column_a'], 'type': 'BTREE'},
            {'columns': ['column_b'], 'type': 'BTREE', 'unique': True},
            {'columns': ['column_a', 'column_b'], 'type': 'CPBTREE'},
            {'columns': ['column_b', 'column_a'], 'type': 'CPBTREE', 'unique': True}
          ]
        )
      }}
      
    • Column table
      {{
        config(
          materialized = "table",
          table_type='column',
          indexes=[
            {'columns': ['column_b'], 'type': 'INVERTED VALUE'},
            {'columns': ['column_a', 'column_b'], 'type': 'INVERTED VALUE'},
            {'columns': ['column_b', 'column_a'], 'type': 'INVERTED VALUE', 'unique': True},
            {'columns': ['column_b', 'column_c'], 'type': 'INVERTED HASH', 'unique': True},
            {'columns': ['column_a', 'column_c'], 'type': 'INVERTED INDIVIDUAL', 'unique': True}
          ]
        )
      }}
      

    Unique Keys as Primary key

    You can set the unique keys as the primary key in incremental and table models by simply enabling a flag. For example, you can configure it like this:

    incremental model:

    {{
      config(
        materialized = "incremental",
        unique_key = ['id', 'name', 'county'],
        unique_as_primary = true
      )
    }}
    

    table model:

    {{
      config(
        materialized = "table",
        unique_key = ['id', 'name', 'county'],
        unique_as_primary = true
      )
    }}
    

    Query partitions in incremental models

    You can divide the transformation of an incremental model into multiple batches using the query_partitions option. This wraps the SQL query in an outer query, which is then filtered based on the respective partition value.

    Example

    Model definition

    {{
        config(
            materialized="incremental",
            unique_key=["ID"],
            unique_as_primary=true,
            query_partitions = [
              {
                      'column':'CATEGORY',
                      'type':'list',
                      'partitions':['train','plane','car'],
                      'default_partition_required':False
              }
            ],
    
        )
    }}
    
      select 1 as ID, 'car' as CATEGORY  from sys.dummy
      union all
      select 2 as ID, 'train' as CATEGORY  from sys.dummy
      union all
      select 3 as ID, 'plane' as CATEGORY  from sys.dummy
    

    Executed query for batch 1 (CATEGORY = 'train')

    select 
        *
    from (
    
        select 1 as ID, 'car' as CATEGORY  from sys.dummy
        union all
        select 2 as ID, 'train' as CATEGORY  from sys.dummy
        union all
        select 3 as ID, 'plane' as CATEGORY  from sys.dummy
    
    ) t
    
    where "CATEGORY" = 'train'

    Executed query for batch 2 (CATEGORY = 'plane')

    select 
        *
    from (
    
        select 1 as ID, 'car' as CATEGORY  from sys.dummy
        union all
        select 2 as ID, 'train' as CATEGORY  from sys.dummy
        union all
        select 3 as ID, 'plane' as CATEGORY  from sys.dummy
    
    ) t
    
    where "CATEGORY" = 'plane'

    Executed query for batch 3 (CATEGORY = 'car')

    select 
        *
    from (
    
        select 1 as ID, 'car' as CATEGORY  from sys.dummy
        union all
        select 2 as ID, 'train' as CATEGORY  from sys.dummy
        union all
        select 3 as ID, 'plane' as CATEGORY  from sys.dummy
    
    ) t
    
    where "CATEGORY" = 'car'

    Configuration options

    The query_partitions configuration option expects a list of query_partitions represented as objects. Each object has the following properties:

    • column: The column after which the partitions (batches) are created.
    • partitions: The definition of the partition values.
    • type: The type of the partitions, which determines how the filter is applied. Possible variants:
      • list: The value must match one of the partition values exactly (e.g., CATEGORY = 'train', CATEGORY = 'car').
      • range: The partition values are sorted in ascending order. A value must be between two partition values (e.g., CREATE_DATE >= '2023-01-01' AND CREATE_DATE < '2024-01-01', CREATE_DATE >= '2024-01-01' AND CREATE_DATE < '2025-01-01').
    • default_partition_required: Defines if a default partition should be added for all rows that do not match any partition value. Possible values:
      • true
      • false

    Note: Currently, a transformation can be partitioned by at most two columns.
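
    The examples above only cover the list type. A hedged sketch of a range configuration, following the same option structure (the column name and boundary values are illustrative):

    {{
        config(
            materialized="incremental",
            unique_key=["ID"],
            query_partitions = [
              {
                      'column':'CREATE_DATE',
                      'type':'range',
                      'partitions':['2023-01-01', '2024-01-01', '2025-01-01'],
                      'default_partition_required':True
              }
            ]
        )
    }}

    With these values, the first batch would be filtered as CREATE_DATE >= '2023-01-01' AND CREATE_DATE < '2024-01-01', matching the filter shape described for the range type above.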

    Custom sqlscript materialization

    Some materializations are too complicated to express with a standard dbt materialization. The sqlscript materialization makes it possible to define custom logic in SQLScript.

    Example:

    {{
        config(
          materialized="sqlscript"
        )
    }}
    
    DO BEGIN
    
      -- Transformation written in sql script
    
    END
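
    As a hedged illustration of what the body might contain (the schema, table, and values are assumptions; the materialization simply executes the block):

    DO BEGIN
      -- illustrative only: populate a hand-managed target table
      INSERT INTO "MY_SCHEMA"."MY_TARGET" ("ID", "CATEGORY")
      SELECT 1, 'car' FROM sys.dummy;
    END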
    

    Automatic creation of virtual tables

    dbt is intended for transforming data that already resides in a database (the “T” in an ELT process).

    Since SAP HANA Cloud can access remote data using SQL (Smart Data Access (SDA) and Smart Data Integration (SDI)), dbt can also be used to extract and load data.

    The saphanacloud dbt adapter includes a macro that automatically creates virtual tables.

    To use this feature, you need to add remote_database and remote_schema as source metadata. Additionally, include the metadata value virtual_table with the boolean value true. The name of the dbt source must match the name of the remote source in SAP HANA Cloud.

    Example source definition:

    version: 2
    
    sources:
      - name: CRM
        schema: RAW_DATA
        meta: {
          virtual_table: true,
          remote_database: 'NULL',
          remote_schema: 'DEFAULT'
        }
        tables:
          - name: CUSTOMERS
          - name: SUPPLIERS
          - name: PRODUCTS
            identifier: VT_PRODUCTS

    Then the following macro has to be called:

    dbt run-operation create_sources

    This command checks if all required virtual tables exist and creates them if they do not. In the example, it will execute the following SQL statements:

    CREATE VIRTUAL TABLE RAW_DATA.CUSTOMERS AT "CRM"."NULL"."DEFAULT"."CUSTOMERS";
    CREATE VIRTUAL TABLE RAW_DATA.SUPPLIERS AT "CRM"."NULL"."DEFAULT"."SUPPLIERS";
    CREATE VIRTUAL TABLE RAW_DATA.VT_PRODUCTS AT "CRM"."NULL"."DEFAULT"."PRODUCTS";

    Note: If the name of the virtual table should be different from the name of the table in the remote source, you can use the identifier property of the table in the source definition.

    SAP HANA Native Storage Extension (NSE)

    You can enable NSE by making either a whole table or selected columns of a table page loadable. This works for both incremental and table materializations. The examples below show the config:

    {{ config(
        materialized='table',
        nse_page_loadable={"type": "table"}
      ) 
    }}
    {{ config(
        materialized='table',
        nse_page_loadable={"type": "column", "names": "col1"}
      ) 
    }}
    {{ config(
        materialized='table',
        nse_page_loadable={"type": "column", "names": ["col1", "col2"]}
    ) 
    }}

    Known Issues

    No known issues

    How to obtain support

    Create an issue in this repository if you find a bug or have questions about the content.

    For additional support, ask a question in SAP Community.

    Contributing

    If you wish to contribute code, offer fixes or improvements, please send a pull request. Due to legal reasons, contributors will be asked to accept a DCO when they create the first pull request to this project. This happens in an automated fashion during the submission process. SAP uses the standard DCO text of the Linux Foundation.

    License

    Copyright (c) 2024 SAP SE or an SAP affiliate company. All rights reserved. This project is licensed under the Apache Software License, version 2.0 except as noted otherwise in the LICENSE file.

    Visit original content creator repository https://github.com/SAP-samples/dbt-sap-hana-cloud
  • erlls

    ErlLS

    erlls vscode version Actions Status License

    Erlang language server.

    Supported LSP features

    Editor integrations

    ErlLS can be used with any LSP clients. Here are a few examples.

    Visual Studio Code / Visual Studio Code for the Web

    Please install erlls extension.

    There is no need to install the erlls binary using the $ cargo install command as the extension already includes the WebAssembly build.

    Settings (settings.json)

    To include the Erlang/OTP applications in the search target, please specify the directory as follows:

    {
        "erlls.erlLibs": "/usr/local/lib/erlang/lib/:_checkouts:_build/default/lib"
    }

    NOTE:

    • The actual path may vary depending on the environment.
    • In VSCode Web, it’s not possible to search applications located outside of the workspace.

    Emacs (lsp-mode)

    1. Install the erlls command.
    $ cargo install erlls
    2. Add the following code to your .emacs file.
    (with-eval-after-load 'lsp-mode
      (add-to-list 'lsp-language-id-configuration
                   '(erlang-mode . "erlang")))
    
    (lsp-register-client
     (make-lsp-client :new-connection (lsp-stdio-connection "erlls")
                      :activation-fn (lsp-activate-on "erlang")
                      :priority -1
                      :server-id 'erlls))
    Visit original content creator repository https://github.com/sile/erlls
  • 42Webserv

    webserver

    • (an automatic translation of the subject, which was itself translated from French into English)

    Here you will finally understand why a URL starts with HTTP.
    The goal of this project is to write your own HTTP server. You will have to test it with a real browser. HTTP is one of the most widely used protocols on the internet. Knowledge of this mysterious area will be very useful for a student, even if you never end up working on websites.

    Introduction

    The Hypertext Transfer Protocol (HTTP) is an application-layer protocol used in distributed, collaborative, hypermedia information systems.
    HTTP is the foundation of data exchange on the World Wide Web. In HTTP, hypertext documents contain hyperlinks to other resources that the user can easily access, for example with a single mouse click on an image in a web browser.
    HTTP was designed to make working with hypertext easier, which in turn makes the World Wide Web easier to use.
    The primary functions of a web server are to store and process web pages and to deliver them to clients.
    Communication between the client and the server takes place over HTTP.
    The delivered objects are usually HTML documents, which may include images, style sheets, and scripts in addition to text content.
    A high-traffic website may be served by several web servers.
    The user agent is usually a web browser or a search crawler. It initiates communication by sending an HTTP request for a specific resource, and the server responds with the content of that resource, or with an error message otherwise. The resource is usually a real file in the server's secondary storage, but that is not necessarily the case and depends on how the web server is implemented.
    While the core function of a web server is storing, processing, and delivering content, a full implementation also includes ways of receiving content from clients. This makes it possible to submit web forms, including file uploads.

    Mandatory part

    Program name: webserv
    Files: Any
    Makefile: Required
    Functions: Everything in C++98, plus: htons, htonl, ntohs, ntohl, select, poll, epoll, kqueue, socket, accept, listen, send, recv, bind, connect, inet_addr, setsockopt, getsockname, fcntl.
    libft: Forbidden
    Description: Write an HTTP server in C++98. C++ equivalents are always preferred over C functions.
    When coding in C++ you must follow the C++98 standard, and your project must compile with it.
    External libraries (Boost and the like) are forbidden.
    Always prefer C++-style code (for example <cstring> instead of <string.h>).
    Your server must be compatible with the web browser of your choice.
    We will assume that Nginx is HTTP 1.1 compliant and may be used to compare headers and responses.
    In the subject, as in real life, we recommend the poll function, but you may use an equivalent such as select, kqueue, or epoll.
    The server must be non-blocking and must use a single poll (or equivalent) for all I/O between client and server (listens included).
    poll (or equivalent) must check for reading and writing at the same time.
    Your server must never block, and a client must be able to disconnect if necessary.
    You may never perform a read or a write operation without going through poll (or equivalent); a sketch of such a loop follows this list.
    Checking the value of the global errno variable after an error in read or write is forbidden.
    A request to your server must never hang forever.
    Your server must have error pages: default ones or your own.
    Your program must not leak memory and must not crash (even when out of memory, once everything is initialized).
    You cannot use fork for anything other than CGI.
    You cannot run another web server via execve().
    Your program must take a configuration file as an argument, or use a static default path.
    You do not need to use poll (or equivalent) before reading your configuration file.
    Your web server must be able to serve a fully static website.
    Clients must be able to upload files.
    Your HTTP status codes must be accurate.
    You must implement at least the GET, POST, and DELETE methods.
    Your server must stay available at all cost under any stress test.
    Your server must be able to listen on multiple ports.
    You are allowed to use fcntl because on Mac OS X the write function is implemented differently than on other Unix OSes!
    You must use non-blocking file descriptors to get behavior similar to other OSes.
    Because the file descriptors are non-blocking, you could use read/recv or write/send without polling and your server would not block. But we are against that.
    Using read/recv or write/send without polling is forbidden; ignoring this rule means a grade of 0.
    You may use fcntl only in the following form:
    fcntl(fd, F_SETFL, O_NONBLOCK);
    Any other flags are forbidden.
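
    As a hedged sketch (not a reference implementation: request parsing, response queuing, POLLOUT handling, and error checks are omitted), a single poll() loop that multiplexes the listening socket and all client sockets in C++98 could look like this:

    #include <vector>
    #include <cstring>
    #include <poll.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main() {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr;
        std::memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(lfd, (sockaddr*)&addr, sizeof(addr));
        listen(lfd, 128);
        fcntl(lfd, F_SETFL, O_NONBLOCK);          // the only allowed fcntl form

        std::vector<pollfd> fds(1);
        fds[0].fd = lfd;
        fds[0].events = POLLIN;

        while (true) {
            poll(&fds[0], fds.size(), -1);        // one poll for ALL I/O
            for (size_t i = 0; i < fds.size(); ++i) {
                if (!(fds[i].revents & POLLIN))
                    continue;
                if (fds[i].fd == lfd) {           // event on the listener: new client
                    int cfd = accept(lfd, NULL, NULL);
                    if (cfd < 0) continue;        // never inspect errno here
                    fcntl(cfd, F_SETFL, O_NONBLOCK);
                    pollfd p; p.fd = cfd; p.events = POLLIN; p.revents = 0;
                    fds.push_back(p);             // add POLLOUT once a response is queued
                } else {                          // event on a client: read the request
                    char buf[4096];
                    ssize_t n = recv(fds[i].fd, buf, sizeof(buf), 0);
                    if (n <= 0) {                 // peer closed or error: drop the client
                        close(fds[i].fd);
                        fds.erase(fds.begin() + i);
                        --i;                      // stay on the same index after erase
                    }
                    // otherwise: feed buf into the request parser...
                }
            }
        }
    }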
    Configuration file
    You can take inspiration from the Nginx configuration file, in particular the 'server' part.
    The configuration file must support the following settings:
    Choosing the port and host for each 'server' (mandatory).
    Setting the server_name (optional).
    The first server for a host:port is the default for that host:port (meaning it answers all requests that do not belong to another server).
    Setting a default error page.
    Limiting the client body size.
    Setting up routes with one or more of the following rules/configurations (routes will not use regexp):

    • Define a list of accepted HTTP methods for the route.
    • Define HTTP redirects.
    • Define a directory or file from which the requested file should be served (for example: if url /kapouet is rooted to /tmp/www, then url /kapouet/pouic/toto/pouet resolves to /tmp/www/pouic/toto/pouet).
    • Turn directory listing on or off.
    • Set a default file to answer with if the request targets a directory.
    • Allow the route to accept uploaded files and configure where they are stored.
    • Execute CGI based on a certain file extension (for example .php).
      — Do you know what CGI is? → link.
      — Because you will not call the CGI directly, use the full path as PATH_INFO.
      — Remember that a chunked request must be reassembled by your server, and the CGI will expect EOF as the end of the body.
      — The same applies to the CGI output: if no content_length is returned, EOF marks the end of the returned data.
      — Your program must call the CGI with the requested file as its first argument.
      — The CGI must be run in the correct directory so that relative-path file access works.
      — Your server should work with only one CGI (php-cgi, Python, and so on).

    You must provide some configuration files and default basic files to test your server during the evaluation.
    If you have a question about a particular behavior, compare it with Nginx. For example, check how server_name works. We also provide a small tester; it is not thorough enough to pass the defense with on its own, but it will help you hunt down some tricky bugs.
    Please read the RFC and run tests with telnet and Nginx before starting this project. Even if you do not implement everything in the RFC, reading it will greatly help you implement the required features.
    The most important thing is resilience. Your server must never die!
    Do not test your project with only one program; write your own tests, in any language you like: Python, Golang, C++, C, and so on.

    Bonus part

    If the mandatory part is not perfect, do not even think about the bonuses.
    Support cookies and session management (do not forget the tests).
    Handle multiple CGI.

    Visit original content creator repository
    https://github.com/AmDogma/42Webserv

  • annealing-normalizing-constants

    Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond

    Code for the paper Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond.

    How to install

    From within your local repository, run

    # Create and activate an environment
    conda env create -f environment.yml
    conda activate annealed-nce
    
    # Install the package
    python setup.py develop

    How to replicate Figures 1, 2, and 3

    With the current parameters, each script is parallelized over 100 CPUs and needs about 100 GB of RAM and up to 7 hours to run.

    # Evaluate how the loss, parameter distance, and dimensionality impact the estimation error
    ipython -i experiments/01_run_experiment_loss.py
    ipython -i experiments/02_run_experiment_distance.py
    ipython -i experiments/03_run_experiment_dimension.py
    
    # Plot results
    ipython -i experiments/01_plot_experiment_loss.py
    ipython -i experiments/02_plot_experiment_distance.py
    ipython -i experiments/03_plot_experiment_dimension.py

    Reference

    If you use this code in your project, please cite:

    @InProceedings{chehab2022annealingnormalizingconstant,
      title =     {Provable benefits of annealing for estimating normalizing constants: Importance Sampling, Noise-Contrastive Estimation, and beyond},
      author =    {Chehab, Omar and Hyv{\"a}rinen, Aapo and Risteski, Andrej},
      booktitle = {Neural Information Processing Systems (NeurIPS)},
      year =      {2023},
    }

    Visit original content creator repository
    https://github.com/l-omar-chehab/annealing-normalizing-constants

  • LangChainAPI

                                     Apache License
                               Version 2.0, January 2004
                            http://www.apache.org/licenses/
    
       TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
    
       1. Definitions.
    
          "License" shall mean the terms and conditions for use, reproduction,
          and distribution as defined by Sections 1 through 9 of this document.
    
          "Licensor" shall mean the copyright owner or entity authorized by
          the copyright owner that is granting the License.
    
          "Legal Entity" shall mean the union of the acting entity and all
          other entities that control, are controlled by, or are under common
          control with that entity. For the purposes of this definition,
          "control" means (i) the power, direct or indirect, to cause the
          direction or management of such entity, whether by contract or
          otherwise, or (ii) ownership of fifty percent (50%) or more of the
          outstanding shares, or (iii) beneficial ownership of such entity.
    
          "You" (or "Your") shall mean an individual or Legal Entity
          exercising permissions granted by this License.
    
          "Source" form shall mean the preferred form for making modifications,
          including but not limited to software source code, documentation
          source, and configuration files.
    
          "Object" form shall mean any form resulting from mechanical
          transformation or translation of a Source form, including but
          not limited to compiled object code, generated documentation,
          and conversions to other media types.
    
          "Work" shall mean the work of authorship, whether in Source or
          Object form, made available under the License, as indicated by a
          copyright notice that is included in or attached to the work
          (an example is provided in the Appendix below).
    
          "Derivative Works" shall mean any work, whether in Source or Object
          form, that is based on (or derived from) the Work and for which the
          editorial revisions, annotations, elaborations, or other modifications
          represent, as a whole, an original work of authorship. For the purposes
          of this License, Derivative Works shall not include works that remain
          separable from, or merely link (or bind by name) to the interfaces of,
          the Work and Derivative Works thereof.
    
          "Contribution" shall mean any work of authorship, including
          the original version of the Work and any modifications or additions
          to that Work or Derivative Works thereof, that is intentionally
          submitted to Licensor for inclusion in the Work by the copyright owner
          or by an individual or Legal Entity authorized to submit on behalf of
          the copyright owner. For the purposes of this definition, "submitted"
          means any form of electronic, verbal, or written communication sent
          to the Licensor or its representatives, including but not limited to
          communication on electronic mailing lists, source code control systems,
          and issue tracking systems that are managed by, or on behalf of, the
          Licensor for the purpose of discussing and improving the Work, but
          excluding communication that is conspicuously marked or otherwise
          designated in writing by the copyright owner as "Not a Contribution."
    
          "Contributor" shall mean Licensor and any individual or Legal Entity
          on behalf of whom a Contribution has been received by Licensor and
          subsequently incorporated within the Work.
    
       2. Grant of Copyright License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          copyright license to reproduce, prepare Derivative Works of,
          publicly display, publicly perform, sublicense, and distribute the
          Work and such Derivative Works in Source or Object form.
    
       3. Grant of Patent License. Subject to the terms and conditions of
          this License, each Contributor hereby grants to You a perpetual,
          worldwide, non-exclusive, no-charge, royalty-free, irrevocable
          (except as stated in this section) patent license to make, have made,
          use, offer to sell, sell, import, and otherwise transfer the Work,
          where such license applies only to those patent claims licensable
          by such Contributor that are necessarily infringed by their
          Contribution(s) alone or by combination of their Contribution(s)
          with the Work to which such Contribution(s) was submitted. If You
          institute patent litigation against any entity (including a
          cross-claim or counterclaim in a lawsuit) alleging that the Work
          or a Contribution incorporated within the Work constitutes direct
          or contributory patent infringement, then any patent licenses
          granted to You under this License for that Work shall terminate
          as of the date such litigation is filed.
    
       4. Redistribution. You may reproduce and distribute copies of the
          Work or Derivative Works thereof in any medium, with or without
          modifications, and in Source or Object form, provided that You
          meet the following conditions:
    
          (a) You must give any other recipients of the Work or
              Derivative Works a copy of this License; and
    
          (b) You must cause any modified files to carry prominent notices
              stating that You changed the files; and
    
          (c) You must retain, in the Source form of any Derivative Works
              that You distribute, all copyright, patent, trademark, and
              attribution notices from the Source form of the Work,
              excluding those notices that do not pertain to any part of
              the Derivative Works; and
    
          (d) If the Work includes a "NOTICE" text file as part of its
              distribution, then any Derivative Works that You distribute must
              include a readable copy of the attribution notices contained
              within such NOTICE file, excluding those notices that do not
              pertain to any part of the Derivative Works, in at least one
              of the following places: within a NOTICE text file distributed
              as part of the Derivative Works; within the Source form or
              documentation, if provided along with the Derivative Works; or,
              within a display generated by the Derivative Works, if and
              wherever such third-party notices normally appear. The contents
              of the NOTICE file are for informational purposes only and
              do not modify the License. You may add Your own attribution
              notices within Derivative Works that You distribute, alongside
              or as an addendum to the NOTICE text from the Work, provided
              that such additional attribution notices cannot be construed
              as modifying the License.
    
          You may add Your own copyright statement to Your modifications and
          may provide additional or different license terms and conditions
          for use, reproduction, or distribution of Your modifications, or
          for any such Derivative Works as a whole, provided Your use,
          reproduction, and distribution of the Work otherwise complies with
          the conditions stated in this License.
    
       5. Submission of Contributions. Unless You explicitly state otherwise,
          any Contribution intentionally submitted for inclusion in the Work
          by You to the Licensor shall be under the terms and conditions of
          this License, without any additional terms or conditions.
          Notwithstanding the above, nothing herein shall supersede or modify
          the terms of any separate license agreement you may have executed
          with Licensor regarding such Contributions.
    
       6. Trademarks. This License does not grant permission to use the trade
          names, trademarks, service marks, or product names of the Licensor,
          except as required for reasonable and customary use in describing the
          origin of the Work and reproducing the content of the NOTICE file.
    
       7. Disclaimer of Warranty. Unless required by applicable law or
          agreed to in writing, Licensor provides the Work (and each
          Contributor provides its Contributions) on an "AS IS" BASIS,
          WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
          implied, including, without limitation, any warranties or conditions
          of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
          PARTICULAR PURPOSE. You are solely responsible for determining the
          appropriateness of using or redistributing the Work and assume any
          risks associated with Your exercise of permissions under this License.
    
       8. Limitation of Liability. In no event and under no legal theory,
          whether in tort (including negligence), contract, or otherwise,
          unless required by applicable law (such as deliberate and grossly
          negligent acts) or agreed to in writing, shall any Contributor be
          liable to You for damages, including any direct, indirect, special,
          incidental, or consequential damages of any character arising as a
          result of this License or out of the use or inability to use the
          Work (including but not limited to damages for loss of goodwill,
          work stoppage, computer failure or malfunction, or any and all
          other commercial damages or losses), even if such Contributor
          has been advised of the possibility of such damages.
    
       9. Accepting Warranty or Additional Liability. While redistributing
          the Work or Derivative Works thereof, You may choose to offer,
          and charge a fee for, acceptance of support, warranty, indemnity,
          or other liability obligations and/or rights consistent with this
          License. However, in accepting such obligations, You may act only
          on Your own behalf and on Your sole responsibility, not on behalf
          of any other Contributor, and only if You agree to indemnify,
          defend, and hold each Contributor harmless for any liability
          incurred by, or claims asserted against, such Contributor by reason
          of your accepting any such warranty or additional liability.
    
       END OF TERMS AND CONDITIONS
    
       APPENDIX: How to apply the Apache License to your work.
    
          To apply the Apache License to your work, attach the following
          boilerplate notice, with the fields enclosed by brackets "[]"
          replaced with your own identifying information. (Don't include
          the brackets!)  The text should be enclosed in the appropriate
          comment syntax for the file format. We also recommend that a
          file or class name and description of purpose be included on the
          same "printed page" as the copyright notice for easier
          identification within third-party archives.
    
       Copyright [yyyy] [name of copyright owner]
    
       Licensed under the Apache License, Version 2.0 (the "License");
       you may not use this file except in compliance with the License.
       You may obtain a copy of the License at
    
           http://www.apache.org/licenses/LICENSE-2.0
    
       Unless required by applicable law or agreed to in writing, software
       distributed under the License is distributed on an "AS IS" BASIS,
       WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
       See the License for the specific language governing permissions and
       limitations under the License.
    

    Visit original content creator repository
    https://github.com/seymasa/LangChainAPI

  • CodingCompetition

    Visit original content creator repository
    https://github.com/arasgungore/CodingCompetition

  • shearconnector

    Shear Connector Database

    What is inside

    This repository stores data from 551 push-out tests of shear connectors installed in steel deck.

    We save the data as a JSON file, available here in the data folder.
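
    To work with the raw data directly, the JSON file can be loaded with a few lines of Python (the file name below is an assumption; check the data folder for the actual name):

    import json

    # hypothetical file name inside the data folder
    with open("data/pushout_tests.json") as f:
        tests = json.load(f)

    print(len(tests))  # expect 551 test records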

    This repository comes with a data viewer. You can use the viewer at https://github.com/hyeyoungkoh/shearstud.

    Or you can clone this repository and run the viewer locally with the following commands

    git clone https://github.com/hyeyoungkoh/shearstud.git
    cd shearstud/
    gatsby develop
    

    Visit the data viewer at http://localhost:8000

    References

    • M. Lawson, E. Aggelopoulos, R. Obiala, F. Hanus, C. Odenbreit, S. Nellinger, U. Kuhlmann, F. Eggert, D. Lam, X. Dai and T. Sheehan, “Development of improved shear connection rules in composite beams – Final Report,” 2017.
    • S. Nellinger, “On the behaviour of shear stud connections in composite beams with deep decking. Ph.D. Dissertation,” Luxembourg, 2015.
    • V. Vigneri, “Load bearing mechanisms of headed stud shear connections in profiled steel sheeting transverse to the beam. Ph.D. Dissertation,” University of Luxembourg, Luxembourg, 2021.
    • M. D. Rambo-Roddenberry, “Behavior and strength of welded stud shear connectors. Ph.D. Dissertation,” Blacksburg, 2002.
    • S. Ernst, “Factors affecting the behaviour of the shear connection of steel-concrete composite beams. Ph.D. Dissertation,” University of Western Sydney, 2006.
    • A. L. Smith and G. H. Couchman, “Strength and ductility of headed stud shear connectors in profiled steel sheeting,” Journal of Constructional Steel Research, vol. 66, pp. 748-754, 2010.
    • K. Roik and K. E. Bürkner, “Beitrag zur Tragfähigkeit von Kopfbolzendübeln in Verbundträgern mit Stahlprofilblechen,” Bauingenieur, vol. 56, no. 3, 1981.
    • K. Roik and K. E. Bürkner, “Untersuchungen des Trägerverbundes unter Verwendung von Stahltrapezprofilen mit einer Höhe> 80 mm,” Studiengesellschaft für Anwendungstechnik von Eisen und Stahl e. V. – Projekt 40, Bochum, 1980.
    • K. Roik and H. Lungershausen, “Verbundträger mit Stahltrapezprofilblechen mit Rippenhöhen > 80 mm,” Studiengesellschaft für Anwendungstechnik von Eisen und Stahl e.V, 1988.
    • M. Konrad, “Tragverhalten von Kopfbolzen in Verbundträgern bei senkrecht spannenden Trapezprofilblechen”, Stuttgart Institut für Konstruktion und Entwurf, Stahl- Holz- und Verbundbau. Ph.D. Dissertation,” Universität Stuttgart, 2011.
    • H. Bode and R. Künzel, “Anwendung der Durchschweißtechnik im Verbundbau. 3. überarbeitete Auflage – Forschungsbericht,” Kaiserslautern, 1999.
    • H. Bode and R. Künzel, “Steifigkeit und Verformbarkeit von Verbundmittelnim Hochbau,” in International symposium – Composite steel concrete structures, Bratislava, 1987.
    • K. Cashell and N. Baddoo, “Experimental assessment of ferritic stainless steel composite slabs,” in International Conference on Composite Construction in Steel and Concrete 2013, North Queensland, 2013.
    • H. Yuan, “The resistance of stud shear connectors with profiled sheeting. Ph.D. Dissertation,” University of Warwick, 1996.
    • S. J. Hicks, “Strength and Ductility of Headed Stud Connectors Welded in Modern Profiled Steel Sheeting,” Structural Engineering International, vol. 4, pp. 415-419, 2009.
    • C. N. Sublett, “Strength of welded headed studs in ribbed metal deck on composite joists,” Blacksburg, 1991.
    • J. C. Lyons, “Strength of welded shear studs,” Blacksburg, 1994.
    • J. T. Mottram and R. P. Johnson, “Push tests on studs welded through profiled steel sheeting,” The Structural Engineer, 1990.
    • B. S. Jayas and M. U. Hosain, “Behaviour of headed studs in composite beams – push-out tests,” Canadian Journal of Civil Engineering, 1987.
    • S. J. Hicks, “Resistance and ductility of shear connection – Full-scale beam and push tests,” in Proceedings of the sixth International Conference on Steel and Aluminium Structures, 2007.
    • S. J. Hicks, “Longitudinal shear resistance of steel and concrete composite beams. Ph.D. Dissertation,” Cambridge, 1997.
    • H. Robinson, “Multiple stud shear connections in deep ribbed metal deck,” Canadian Journal of Civil Engineering, 1988.
    • S. J. Hicks and A. L. Smith, “Stud Shear Connectors in Composite Beams that support Slabs with Profiled Steel Sheeting,” Structural Engineering International, vol. 24, no. 2, pp. 246-253, 2014.
    • R. Lloyd and H. D. Wright, “Shear connection between composite slabs and steel beams,” Journal of constructional steel research, vol. 15, no. 4, pp. 255-285, 1990.
    • M. H. Shen and K. F. Chung, “Structural behaviour of stud shear connections with solid and composite slabs under co-existing shear and tension forces,” Structures Vol. 9, 2017.
    • M. J. Russell, G. C. Clifton and J. B. Lim, “Vertical and horizontal push tests on specimens with a Trefoil decking profile,” Structures,, vol. 29, pp. 1096-1110, 2021.
    • S. J. Hicks, Private communication, University of Warwick, January 2020.

    Visit original content creator repository
    https://github.com/hyeyoungkoh/shearconnector

  • pf-calendar-events-data

    <pf-calendar-events-data>

    pf-elements

    A Polymer 2.0 based collection of reusable web components. Join the chat at https://gitter.im/pf-elements/Lobby

    pf-calendar-events-data

    An advanced Polymer 2.0 based custom element that fetches calendar events, appointments, and meetings data from Firebase. To be used in conjunction with the pf-calendar-events element.

    A Firebase-based headless Polymer 2.0 element. It has no UI and is used for CRUD operations on calendar events: it exposes an API to add, update, and delete events and manages those operations on Firebase.

    Use this element in conjunction with the other pf-calendar UI elements, or build your own front-end element and let this one take care of the backend Firebase integration.

    Element Name Latest Version (Bower) Npm version Build Status
    pf-calendar-events-data GitHub version npm version Build Status

    Published on webcomponents.org

    Demo

    Click here for docs & demo

    Install the Polymer-CLI

    First, make sure you have the Polymer CLI installed. Then run polymer serve to serve your application locally.

    Learn more

    See the list of elements, demos, and documentation by browsing this collection on webcomponents.org:

    Methods

    The following methods are available for CRUD event operations:

    Methods Description
    addEvent(event) Takes an event object and adds it to Firebase as a new event, meeting, or reminder
    updateEvent(key,event) Takes a Firebase data ref key and an updated event object, and updates the node at that ref key
    deleteEvent(key) Takes a record ref key and deletes that event
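
    A hypothetical call sequence (the event fields and the key format below are assumptions for illustration, not part of the documented API):

    var el = document.querySelector('pf-calendar-events-data');

    // hypothetical event object; field names are assumptions
    el.addEvent({title: 'Standup', start: '2017-05-01', color: 'green'});

    // '-Kx123' stands in for the Firebase ref key of an existing record
    el.updateEvent('-Kx123', {color: 'red'});
    el.deleteEvent('-Kx123');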

    Example

         <firebase-app
                      name="pf-calendar-firebase"
                      api-key="AIzaSyBOML3Qc_rtqDeVAr2ous6Z8-E1FDqH4CI"
                      auth-domain="pf-calendar-firebase.firebaseapp.com"
                      database-url="https://pf-calendar-firebase.firebaseio.com">
              </firebase-app>
              <pf-calendar-events-data
                      databasename="pf-calendar-firebase"
                      databasepath="testdata"
                      eventsData="{{results}}"
                      filterAttr="color"
                      filterValue="green"></pf-calendar-events-data>

    -> Replace the firebase-app configuration with your own

    Viewing Your Application

    $ polymer serve
    

    Building Your Application

    $ polymer build
    

    This will create a build/ folder with bundled/ and unbundled/ sub-folders, containing bundled (vulcanized) and unbundled builds, both run through HTML, CSS, and JS optimizers.

    You can serve the built versions by giving polymer serve a folder to serve from:

    $ polymer serve build/bundled
    

    Running Tests

    $ polymer test
    

    Your application is already set up to be tested via web-component-tester. Run polymer test to run your application’s test suite locally.

    Contributing

    Comments, questions, suggestions, issues, and pull requests are all welcome.

    Get in touch with the team

    Join us in the chat at https://gitter.im/pf-elements/Lobby

    Some ways to help:

    • Test the elements and provide feedback: We would love to hear your feedback on anything related to the elements, like features, API and design. The best way to start is by trying them out. And to get a quick response, either drop a question/comment on the chat or open an issue in GitHub.
    • Report bugs: File issues for the elements in their respective GitHub projects.
    • Send pull requests: If you want to contribute code, check out the development instructions below.

    We encourage you to read the contribution instructions by GitHub also.

    License

    MIT License

    Visit original content creator repository https://github.com/PFElements/pf-calendar-events-data