Saleor front-end installation - python

I am trying to install the Saleor front-end package from GitHub. The documentation is outdated and I get an error when I run
npm start
Error: Environment variable API_URI not set
I found this variable in different places but did not know what to change, and where to set it
EDIT: Solved. Just in case somebody is going through the same problem, set this in webpack/config.base.js:
process.env.API_URI = 'http://localhost:8000/graphql/'

On Linux, I fixed this by setting the environment variable in the terminal before running npm start:
export API_URI=http://localhost:8000/graphql/

Create a file named ".env" in the root directory of /saleor-storefront and write inside:
API_URI=http://localhost:8000/graphql/
This defines an environment variable called API_URI with the value 'http://localhost:8000/graphql/'.
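For intuition, tools read a .env file at startup and copy its entries into the process environment. Here is a minimal stdlib sketch of what such a loader does (the real storefront likely uses a dedicated library such as dotenv; this is only an illustration):

```python
import os

def load_dotenv(path=".env"):
    """Minimal .env loader: each KEY=VALUE line becomes an environment
    variable, unless the variable is already set in the real environment."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Write a .env like the one above, then load it.
with open(".env", "w") as f:
    f.write("API_URI=http://localhost:8000/graphql/\n")

load_dotenv()
print(os.environ["API_URI"])  # http://localhost:8000/graphql/
```

Note that `setdefault` means a variable exported in the shell wins over the file, which is the convention most .env loaders follow.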

Create a .env file in the root directory or set environment variables with the following values:
API_URI (required) - URI of a running instance of Saleor GraphQL API. If you are running Saleor locally with the default settings, set API_URI to: http://localhost:8000/graphql/.
APP_MOUNT_URI - URI at which the Dashboard app will be mounted. E.g. If you set APP_MOUNT_URI to /dashboard/, your app will be mounted at http://localhost:9000/dashboard/.
STATIC_URL - URL where the static files are located. E.g. if you use S3 bucket, you should set it to the bucket's URL. By default Saleor assumes you serve static files from the root of your site at http://localhost:9000/.
Saleor on Github: How to configure the Dashboard
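Putting those three settings together, a minimal .env for a local setup might look like this (the values are illustrative defaults taken from the descriptions above, not requirements):

```
API_URI=http://localhost:8000/graphql/
APP_MOUNT_URI=/dashboard/
STATIC_URL=http://localhost:9000/
```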

Related

Pydantic validation error for BaseSettings model with local ENV file

I'm developing a simple FastAPI app and I'm using Pydantic for storing app settings.
Some settings are populated from environment variables set by Ansible deployment tools, but some other settings need to be set explicitly from a separate env file.
So I have this in config.py
from os import getenv
from pydantic import BaseSettings

class Settings(BaseSettings):
    # Project wide settings
    PROJECT_MODE: str = getenv("PROJECT_MODE", "sandbox")
    VERSION: str

    class Config:
        env_file = "config.txt"
And I have this config.txt
VERSION="0.0.1"
So project_mode env var is being set by deployment script and version is being set from the env file. The reason for that is that we'd like to keep deployment script similar across all projects, so any custom vars are populated from the project specific env files.
But the problem is that when I run the app, it fails with:
pydantic.error_wrappers.ValidationError: 1 validation error for Settings
VERSION
field required (type=value_error.missing)
So how can I populate Pydantic settings model from the local ENV file?
If the environment file isn't being picked up, most of the time it's because it isn't in the current working directory. It needs to be in the directory the application is run from (or, if the application manages the CWD itself, wherever it expects to find it).
In particular when running tests this can be a bit confusing, and you might have to configure your IDE to run tests with the CWD set to the project root if you run the tests from your IDE.
The path of env_file is relative to the current working directory, which confused me as well. In order to always use a path relative to the config module I set it up like this (note the assignment, and the pathlib import it needs):
import pathlib
env_file = f"{pathlib.Path(__file__).resolve().parent}/config.txt"
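For intuition, here is a stdlib sketch of the resolution order pydantic's BaseSettings applies: real environment variables take priority, values from env_file act as a fallback, and a field with neither raises a "field required" error. The resolve_settings helper is illustrative, not pydantic's API:

```python
import os

def resolve_settings(fields, env_file="config.txt"):
    """Mimic BaseSettings resolution: env vars win over env_file values."""
    file_values = {}
    try:
        with open(env_file) as f:
            for line in f:
                line = line.strip()
                if line and "=" in line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    file_values[key.strip()] = value.strip().strip('"')
    except FileNotFoundError:
        pass
    settings = {}
    for name in fields:
        if name in os.environ:
            settings[name] = os.environ[name]       # real env var wins
        elif name in file_values:
            settings[name] = file_values[name]      # env file fallback
        else:
            raise ValueError(f"field required: {name}")  # like pydantic's error
    return settings

# VERSION comes only from the file; PROJECT_MODE only from the environment.
with open("config.txt", "w") as f:
    f.write('VERSION="0.0.1"\n')
os.environ["PROJECT_MODE"] = "sandbox"
print(resolve_settings(["PROJECT_MODE", "VERSION"]))
# {'PROJECT_MODE': 'sandbox', 'VERSION': '0.0.1'}
```

This also shows why the question's setup failed: if config.txt isn't found relative to the CWD, the file fallback is empty and VERSION is reported as missing.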

How to declare extra modules (static files) in python dash app hosted on AWS Elastic Beanstalk

I am trying to run a Python Dash app on AWS Elastic Beanstalk, but it is missing the CSS and the folder of CSVs the app uses as data.
I could only declare one static folder, but my directory looks like this.
My folder structure: (screenshot omitted)
You can use a configuration file to configure static file paths and directory locations using configuration options. You can add a configuration file to your application's source bundle and deploy it during environment creation or a later deployment.
If your environment uses a platform branch based on Amazon Linux 2, use the aws:elasticbeanstalk:environment:proxy:staticfiles namespace.
The following example configuration file tells the proxy server to serve files in the statichtml folder at the path /html, and files in the staticimages folder at the path /images.
Example .ebextensions/static-files.config
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /html: statichtml
    /images: staticimages
More information can be found here: Elastic BeanStalk - Serving static files.
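Applied to the question, each folder that should be served directly gets its own mapping under the same namespace. The folder names assets and data below are assumptions based on the question's description, not the asker's actual layout:

```
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /assets: assets
    /data: data
```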

Where do I set environment variables on my Django Digital Ocean server?

I'm running my Django project on my Ubuntu 16.04 Digital Ocean server running Gunicorn/Nginx. I have my whole project deployed except my settings.py file, so I am looking to add that in now. However, I don't want to hardcode the SECRET_KEY, so I am looking to define an environment variable like it says in the Django docs: SECRET_KEY = os.environ['SECRET_KEY'].
Where do I define this variable? Is it in my gunicorn config file (/etc/systemd/system/gunicorn.service)?
You can create environmental variables inside your .bashrc file in your home folder.
Just open the .bashrc file from home folder
sudo vi ~/.bashrc
And then at the end of the file, add your variable
export SECRET_KEY='your secret key'
then save it, and run the source command on the file so the variable is applied without restarting the system:
source ~/.bashrc
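One caveat worth noting: variables exported from .bashrc apply to interactive shells, and a service started by systemd (such as the gunicorn.service unit the question mentions) does not read them. A common alternative, sketched here, is to set the variable directly in the unit file:

```
# /etc/systemd/system/gunicorn.service (excerpt)
[Service]
Environment="SECRET_KEY=your secret key"
```

After editing the unit, run sudo systemctl daemon-reload and then sudo systemctl restart gunicorn for the change to take effect.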

Elastic Beanstalk doesn't accept my changes to WSGIPath

I have an app where I want to run it from aws_wsgi.py instead of application.py, as there are several different entry points depending on where we are hosting it. For this reason, I would like to be able to change the WSGIPath variable to point to the correct location.
This, in an .ebextensions .config file, does not work:
option_settings:
  - namespace: "aws:elasticbeanstalk:container:python"
    option_name: WSGIPath
    value: "/opt/python/current/app/aws_wsgi.py"
The environment attempts to use 'application.py' despite the above lines. No error appears to be emitted. Other parts of that same config file work perfectly, such as packages commands to get the system to install some yum packages. I can confirm the config files are getting uploaded in the logs:
INFO: Creating new application version using project code
WARNING: You have uncommitted changes.
INFO: Getting version label from git with git-describe
Creating application version archive "0_3_0-507-ga36f".
INFO: creating zip using git archive HEAD
INFO: git archive output: .ebextensions/
.ebextensions/01-weave_server_eb.config
.ebextensions/02-weave_server_eb_lxml_dependencies.config
.ebextensions/03-weave_server_eb_nltk_data.config
.ebextensions/04-weave_server_eb_entity_data.config
.ebextensions/05-weave_server_eb_geography_data.config
.gitattributes
.gitignore
...etc.
We run with a saved configuration, i.e. via eb create --cfg Live, and in the dashboard, that configuration shows that WSGIPath is "application.py". But there is nowhere to change that value in the dashboard. It seems like it is a built-in value that overrides the data we send with the above config file.
I tried adding it as an environment variable via the dashboard, but that goes in the aws:elasticbeanstalk:application:environment namespace and does not affect how the application is started up in the first place. (I checked this by using eb config to download the configuration.)
Maybe I could add a section in the file retrieved by eb config, but I heard that doing that will start to override .ebextensions files, and I have several important commands in my .ebextensions files that I need to continue using. (But see the comment below.) It's not clear from the documentation how .ebextensions data translates to and compares with config files used by eb config but the .ebextensions files are well-documented and reasonably convenient so I'd rather not break those if possible!
If I retrieve the configuration on the server via eb config get Live, it contains the following (numerous API keys removed):
EnvironmentConfigurationMetadata:
  Description: Includes API keys for live operation
  DateModified: '1437734273000'
  DateCreated: '1437734273000'
AWSConfigurationTemplateVersion: 1.1.0.0
EnvironmentTier:
  Name: WebServer
  Type: Standard
SolutionStack: 64bit Amazon Linux 2015.03 v1.4.3 running Python 2.7
OptionSettings:
  aws:elb:loadbalancer:
    CrossZone: true
  aws:elasticbeanstalk:command:
    BatchSize: '30'
    BatchSizeType: Percentage
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    EC2KeyName: aws-eb
    InstanceType: t2.micro
  aws:elb:policies:
    ConnectionDrainingEnabled: true
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateType: Health
    RollingUpdateEnabled: true
  aws:elasticbeanstalk:application:environment:
    DATA_DIR: /opt/python/current/app/data
    WSGIPath: /opt/python/current/aws_wsgi.py
  aws:elb:healthcheck:
    Interval: '30'
(NB. The WSGIPath environment variable there is invalid - but I am unable to remove it from the configuration due to bugs in the AWS Dashboard. It appears to have no effect anyway.)
How do I get AWS to respect my chosen WSGIPath?
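For reference, the documented examples for this namespace give WSGIPath as a path relative to the application root rather than an absolute instance path. Whether that alone resolves the precedence conflict with the saved configuration described above is unverified:

```
option_settings:
  - namespace: "aws:elasticbeanstalk:container:python"
    option_name: WSGIPath
    value: "aws_wsgi.py"
```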

Setting NewRelic environment on Dotcloud (Python)

I have a Python application that is set up using the new New Relic configuration variables in the dotcloud.yml file, which works fine.
However I want to run a sandbox instance as a test/staging environment, so I want to be able to set the environment of the newrelic agent so that it uses the different configuration sections of the ini configuration. My dotcloud.yml is set up as follows:
www:
  type: python
  config:
    python_version: 'v2.7'
    enable_newrelic: True
  environment:
    NEW_RELIC_LICENSE_KEY: *****************************************
    NEW_RELIC_APP_NAME: Application Name
    NEW_RELIC_LOG: /var/log/supervisor/newrelic.log
    NEW_RELIC_LOG_LEVEL: info
    NEW_RELIC_CONFIG_FILE: /home/dotcloud/current/newrelic.ini
I have custom environment variables so that the sandbox is set to "test" and the live application to "production".
I am then calling the following in my wsgi.py:
import os
import newrelic.agent

NEWRELIC_CONFIG = os.environ.get('NEW_RELIC_CONFIG_FILE')
ENVIRONMENT = os.environ.get('MY_ENVIRONMENT', 'test')
newrelic.agent.initialize(NEWRELIC_CONFIG, ENVIRONMENT)
However the dotcloud instance is already enabling newrelic because I get this in the uwsgi.log file:
Sun Nov 18 18:50:12 2012 - unable to load app 0 (mountpoint='') (callable not found or import error)
Traceback (most recent call last):
  File "/home/dotcloud/current/wsgi.py", line 15, in <module>
    newrelic.agent.initialize(NEWRELIC_CONFIG, ENVIRONMENT)
  File "/opt/ve/2.7/local/lib/python2.7/site-packages/newrelic-1.8.0.13/newrelic/config.py", line 1414, in initialize
    log_file, log_level)
  File "/opt/ve/2.7/local/lib/python2.7/site-packages/newrelic-1.8.0.13/newrelic/config.py", line 340, in _load_configuration
    'environment "%s".' % (_config_file, _environment))
newrelic.api.exceptions.ConfigurationError: Configuration has already been done against differing configuration file or environment. Prior configuration file used was "/home/dotcloud/current/newrelic.ini" and environment "None".
So it would seem that the newrelic agent is being initialised before uwsgi.py is called.
So my question is:
Is there a way to initialise the newrelic environment?
The easiest way to do this, without changing any code would be to do the following.
Create a new sandbox app on dotCloud (see http://docs.dotcloud.com/0.9/guides/flavors/ for more information about creating apps in sandbox mode)
$ dotcloud create -f sandbox <app_name>
Deploy your code to the new sandbox app.
$ dotcloud push
Now you should have the same code running in both your live and sandbox apps. But because you want to change some of the ENV variables for the sandbox app, you need to do one more step.
According to this page http://docs.dotcloud.com/0.9/guides/environment/#adding-environment-variables there are 2 different ways of adding ENV variables.
Using the dotcloud.yml's environment section.
Using the dotcloud env cli command
Whereas dotcloud.yml allows you to define different environment variables for each service, dotcloud env set environment variables for the whole application. Moreover, environment variables set with dotcloud env supersede environment variables defined in dotcloud.yml.
That means that if we want to have different values for our sandbox app, we just need to run a dotcloud env command to set those variables on the sandbox app, which will override the ones in your dotcloud.yml
If we just want to change one variable, we would run this command:
$ dotcloud env set NEW_RELIC_APP_NAME='Test Application Name'
If we want to update more than one at a time, we would do the following:
$ dotcloud env set \
'NEW_RELIC_APP_NAME="Test Application Name"' \
'NEW_RELIC_LOG_LEVEL=debug'
To make sure that your env variables are set correctly, you can run the following command:
$ dotcloud env list
Notes
The commands above use the new dotCloud 0.9.x CLI. If you are using the older one, you will need to either upgrade to the new one or refer to the documentation for the old CLI: http://docs.dotcloud.com/0.4/guides/environment/
Setting environment variables restarts your application so the new values take effect; to limit your downtime, set all of them in one command.
Unless they are doing something odd, you should be able to override the app_name supplied by the agent configuration file by doing:
import newrelic.agent
newrelic.agent.global_settings().app_name = 'Test Application Name'
Don't call newrelic.agent.initialize() a second time.
This will only work if app_name is listing a single application to report data to.
