I need help configuring VS Code to run Python scripts with Ctrl+Shift+B. It was working fine until VS Code upgraded its task configuration to version 2.0.0; now it wants me to configure the Build, and I am clueless about what the Build is all about.
In the past it worked well when I only needed to configure the task runner. There are YouTube videos for the task runner, but I can't seem to lay my finger on what the Build is all about.
In VS Code, go to Tasks -> Configure Tasks:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"taskName": "Run File",
"command": "python ${file}",
"type": "shell",
"group": {
"kind": "build",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new",
"focus": true
}
},
{
"taskName": "nosetest",
"command": "nosetests -v",
"type": "shell",
"group": {
"kind": "test",
"isDefault": true
},
"presentation": {
"reveal": "always",
"panel": "new",
"focus": true
}
}
]
}
command: runs the current Python file
group: "build" (with "isDefault": true, making it the default build task)
presentation:
- always shows the shell when run ("reveal": "always")
- uses a new shell ("panel": "new")
- focuses the shell, i.e. keyboard input is captured in the shell window ("focus": true)
The 2nd task is configured as the default test task and just runs nosetests -v in the folder that's currently open in VS Code.
The "Run Build Task" command (the one that's bound to Ctrl+Shift+B) runs whichever task is configured as the default build task, task 1 in this example (see the group entry).
EDIT:
Suggested by @RafaelZayas in the comments (this will use the Python interpreter that's specified in VS Code's settings rather than the system default; see his comment for more info):
"command": "${command:python.interpreterPath} ${file}"
...Don't have enough reputation to comment on the accepted answer...
At least in my environment (Ubuntu 18.04, with a virtual env), if arguments are being passed in with "args", the file must be the first argument, as @VladBezden does, and not part of the command, as @orangeInk does. Otherwise I get the message "No such file or directory".
Specifically, the answer from @VladBezden does work for me, where the following does not.
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "build",
"command": "${config:python.pythonPath} setup.py", // WRONG, move setup.py to args
"group": {
"kind": "build",
"isDefault": true
},
"args": [
"install"
],
"presentation": {
"echo": true,
"panel": "shared",
"focus": true
}
}
]
}
This took me a while to figure out, so I thought I would share.
Here is my configuration for the build (Ctrl+Shift+B)
tasks.json
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "build",
"command": "python",
"group": {
"kind": "build",
"isDefault": true
},
"args": [
"setup.py",
"install"
],
"presentation": {
"echo": true,
"panel": "shared",
"focus": true
}
}
]
}
I am trying to create a custom analyzer with the Elasticsearch Python client. I'm referring to this article in the Elasticsearch documentation:
elastic docs article
When I send a PUT request with the following JSON settings, it returns 200 Success.
PUT my-index-000001
{
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"char_filter": [
"emoticons"
],
"tokenizer": "punctuation",
"filter": [
"lowercase",
"english_stop"
]
}
},
"tokenizer": {
"punctuation": {
"type": "pattern",
"pattern": "[ .,!?]"
}
},
"char_filter": {
"emoticons": {
"type": "mapping",
"mappings": [
":) => _happy_",
":( => _sad_"
]
}
},
"filter": {
"english_stop": {
"type": "stop",
"stopwords": "_english_"
}
}
}
}
}
The issue comes when I try to do the same with the Python client. Here's how I am using it.
settings.py to define settings
settings = {
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"char_filter": [
"emoticons"
],
"tokenizer": "punctuation",
"filter": [
"lowercase",
"english_stop"
]
}
},
"tokenizer": {
"punctuation": {
"type": "pattern",
"pattern": "[ .,!?]"
}
},
"char_filter": {
"emoticons": {
"type": "mapping",
"mappings": [
":) => _happy_",
":( => _sad_"
]
}
},
"filter": {
"english_stop": {
"type": "stop",
"stopwords": "_english_"
}
}
}
}
}
create-index helper method
es_connection.create_index(index_name="test", mapping=mapping, settings=settings)
es-client call
def create_index(self, index_name: str, mapping: Dict, settings) -> None:
"""
Create an ES index.
:param index_name: Name of the index.
:param mapping: Mapping of the index
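:param settings: Settings of the index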
"""
logging.info(f"Creating index {index_name} with the following schema: {json.dumps(mapping, indent=2)}")
self.es_client.indices.create(index=index_name, ignore=400, mappings=mapping, settings=settings)
I get the following error in the logs:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.analyzer.my_custom_analyzer.char_filter] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"}],"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.analyzer.my_custom_analyzer.char_filter] please check that any required plugins are installed, or check the breaking changes documentation for removed settings","suppressed":[{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.analyzer.my_custom_analyzer.filter] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.analyzer.my_custom_analyzer.tokenizer] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.char_filter.emoticons.mappings] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.char_filter.emoticons.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.filter.english_stop.stopwords] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.filter.english_stop.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.tokenizer.punctuation.pattern] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},{"type":"illegal_argument_exception","reason":"unknown setting [index.settings.analysis.tokenizer.punctuation.type] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"}]},"status":400}
Any idea what causes this issue? Is it related to ignore=400? Thanks in advance.
PS - I'm using docker.elastic.co/elasticsearch/elasticsearch:7.15.1 and the Python Elasticsearch client 7.15.1.
You simply need to remove the settings section at the top because it's added automatically by the client code:
settings = {
"settings": { <--- remove this line
"analysis": {
"analyzer": {
Yesterday I could run and debug my Azure Functions project with VS Code. Today, when I hit F5 ("Start Debugging"), a pop-up instantly appears with:
connect ECONNREFUSED 127.0.0.1:9091
I believe it's a network issue, since VS Code doesn't even open a terminal displaying "Executing task [...]".
But within VS Code I can still browse the extension marketplace and run "pip install" in the terminal.
Here is my configuration:
launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current file",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
{
"name": "Azure Functions",
"type": "python",
"request": "attach",
"port": 9091,
"preLaunchTask": "func: host start"
}
]
}
tasks.json
{
"version": "2.0.0",
"tasks": [
{
"type": "func",
"command": "host start",
"problemMatcher": "$func-python-watch",
"isBackground": true,
"dependsOn": "pipInstall",
"options": {
"cwd": "${config:azureFunctions.projectSubpath}"
}
},
{
"label": "pipInstall",
"type": "shell",
"command": "${config:python.pythonPath} -m pip install -r requirements.txt",
"problemMatcher": [],
"options": {
"cwd": "${config:azureFunctions.projectSubpath}"
}
}
]
}
settings.json
{
"azureFunctions.deploySubpath": "azure-functions\\my_project",
"azureFunctions.projectSubpath": "azure-functions\\my_project",
"azureFunctions.scmDoBuildDuringDeployment": true,
"azureFunctions.pythonVenv": ".venv",
"azureFunctions.projectLanguage": "Python",
"azureFunctions.projectRuntime": "~3",
"debug.internalConsoleOptions": "neverOpen",
"python.pythonPath": "C:\\Users\\myself\\BigRepo\\azure-functions\\my_project\\.venv\\Scripts\\python.exe",
"powershell.codeFormatting.addWhitespaceAroundPipe": true
//"terminal.integrated.shell.windows": "&&"
}
This is so weird. I edited my launch.json back and forth; sometimes it was valid, sometimes it was not, and now, after saving my original launch.json again, the configuration works.
This configuration works fine:
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current file",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
{
"name": "Azure Functions",
"type": "python",
"request": "attach",
"port": 9091,
"preLaunchTask": "func: host start"
}
]
}
I'm not sure I understand what is going on.
Re-install everything.
It works.
I have a Python Azure Function that triggers on messages to a topic, which works fine on its own. However, if I then try to also write a message to a different Service Bus queue, it doesn't work (as in, the Azure Function won't even trigger when new messages are published to the topic). It feels like the trigger conditions aren't met when I include the msg_out: func.Out[str] parameter. Any help would be much appreciated!
__init__.py
import logging
import azure.functions as func
def main(msg: func.ServiceBusMessage, msg_out: func.Out[str]):
# Log the Service Bus Message as plaintext
# logging.info("Python ServiceBus topic trigger processed message.")
logging.info("Changes are coming through!")
msg_out.set("Send an email")
function.json
{
"scriptFile": "__init__.py",
"entryPoint": "main",
"bindings": [
{
"name": "msg",
"type": "serviceBusTrigger",
"direction": "in",
"topicName": "publish-email",
"subscriptionName": "validation-sub",
"connection": "Test_SERVICEBUS"
},
{
"type": "serviceBus",
"direction": "out",
"connection": "Test_SERVICEBUS",
"name": "msg_out",
"queueName": "email-test"
}
]
}
host.json
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[2.*, 3.0.0)"
},
"extensions": {
"serviceBus": {
"prefetchCount": 100,
"messageHandlerOptions": {
"autoComplete": true,
"maxConcurrentCalls": 32,
"maxAutoRenewDuration": "00:05:00"
},
"sessionHandlerOptions": {
"autoComplete": false,
"messageWaitTimeout": "00:00:30",
"maxAutoRenewDuration": "00:55:00",
"maxConcurrentSessions": 16
}
}
}
}
I can reproduce your problem; it seems to be caused by the following error:
Property sessionHandlerOptions is not allowed.
After deleting sessionHandlerOptions, the function can be triggered normally.
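Concretely, the serviceBus block in host.json would then look like this (a sketch of your configuration above with only sessionHandlerOptions removed):
"extensions": {
    "serviceBus": {
        "prefetchCount": 100,
        "messageHandlerOptions": {
            "autoComplete": true,
            "maxConcurrentCalls": 32,
            "maxAutoRenewDuration": "00:05:00"
        }
    }
}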
I'm looking for a way to set service/status/loadBalancer/ingress-ip after creating a k8s service of type=LoadBalancer (as described in the 'Type LoadBalancer' section at https://kubernetes.io/docs/concepts/services-networking/service/).
My problem is similar to the issue described in the following link (Is it possible to update a kubernetes service 'External IP' while watching for the service?), but I couldn't find the answer there.
Thanks in advance
There are two ways to do this: with a JSON patch or with a merge patch. Here's how you do the latter:
[centos@ost-controller ~]$ cat patch.json
{
"status": {
"loadBalancer": {
"ingress": [
{"ip": "8.3.2.1"}
]
}
}
}
Here you can see that, for merge patches, you have to build a dictionary containing the whole object tree (beginning at status) that needs to be merged. If you wanted to replace something instead, you'd have to use the JSON patch strategy.
Once we have this file, we send the request, and if all goes well, we'll receive a response consisting of the object with the merge already applied:
[centos@ost-controller ~]$ curl --request PATCH --data "$(cat patch.json)" -H "Content-Type:application/merge-patch+json" http://localhost:8080/api/v1/namespaces/default/services/kubernetes/status
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kubernetes",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/kubernetes/status",
"uid": "b8ece320-76c1-11e7-b468-fa163ea3fb09",
"resourceVersion": "2142242",
"creationTimestamp": "2017-08-01T14:00:06Z",
"labels": {
"component": "apiserver",
"provider": "kubernetes"
}
},
"spec": {
"ports": [
{
"name": "https",
"protocol": "TCP",
"port": 443,
"targetPort": 6443
}
],
"clusterIP": "10.0.0.129",
"type": "ClusterIP",
"sessionAffinity": "ClientIP"
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "8.3.2.1"
}
]
}
}
}
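For completeness, here is a rough, untested sketch of the JSON patch alternative mentioned above. The file name patch-ops.json is just an example; per RFC 6902, "add" also overwrites the value if ingress already exists:
[centos@ost-controller ~]$ cat patch-ops.json
[
  {"op": "add", "path": "/status/loadBalancer/ingress", "value": [{"ip": "8.3.2.1"}]}
]
[centos@ost-controller ~]$ curl --request PATCH --data "$(cat patch-ops.json)" -H "Content-Type:application/json-patch+json" http://localhost:8080/api/v1/namespaces/default/services/kubernetes/status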
I have the following startup script variable defined in my Python script:
default_startup_script = """
#! /bin/bash
cd ~/git/gcloud;
git config --global user.email "my.email@gmail.com";
git config --global user.name "my.name";
git stash;
git pull https://user:pw@bitbucket.org/url/my_repo.git;
"""
and the following config:
config = {
"name": "instance-bfb6559d-788f-48b7-85a3-8ff3ab6e5a60",
"zone": "projects/username-165421/zones/us-east1-b",
"machineType": "projects/username-165421/zones/us-east1-b/machineTypes/f1-micro",
"metadata": {
"items": [{'key':'startup-script','value':default_startup_script}]
},
"tags": {
"items": [
"http-server",
"https-server"
]
},
"disks": [
{
"type": "PERSISTENT",
"boot": True,
"mode": "READ_WRITE",
"autoDelete": True,
"deviceName": "instance-4",
"initializeParams": {
"sourceImage": "projects/username-165421/global/images/image-id",
"diskType": "projects/username-165421/zones/us-east1-b/diskTypes/pd-standard",
"diskSizeGb": "10"
}
}
],
"canIpForward": False,
"networkInterfaces": [
{
"network": "projects/username-165421/global/networks/default",
"subnetwork": "projects/username-165421/regions/us-east1/subnetworks/default",
"accessConfigs": [
{
"name": "External NAT",
"type": "ONE_TO_ONE_NAT"
}
]
}
],
"description": "",
"labels": {},
"scheduling": {
"preemptible": False,
"onHostMaintenance": "MIGRATE",
"automaticRestart": True
},
"serviceAccounts": [
{
"email": "123456-compute#developer.gserviceaccount.com",
"scopes": [
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring.write",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/trace.append"
]
}
]
}
Now - the instance creates without issue, but the startup script does not fire.
I am creating the instance by running:
compute.instances().insert(
project=project,
zone=zone,
body=config).execute()
All of the samples were retrieved from here.
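For context, here is a minimal sketch of the setup those samples use for compute, project and zone (the values just mirror the ones in the config above):
import googleapiclient.discovery

# build the Compute Engine API client used by the insert call above
compute = googleapiclient.discovery.build('compute', 'v1')
project = 'username-165421'
zone = 'us-east1-b'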
Once the instance is created, if I paste in my startup script manually, it works without issue.
Does anyone have any idea what I am doing wrong here?
This works. My issue was related to user accounts: I was not logging in as the default user (e.g. username@instance-id).
If you are reading this question, just be sure which username you intend to run this for and manage accordingly.