Exment is a web database system built with Laravel (that is, a LAMP stack, which means it also runs on a shared rental server). You can operate a relational database from a GUI admin screen.
Exment is OSS, licensed under the GPL. Development is led by Kajitori Co., Ltd.; for details, see the developer's article.
When you think of a GUI web DB, Cybozu's kintone might come to mind.
I have never heard the word "kintone" from the developers themselves, but I secretly think of Exment as a kintone clone, and as an OSS product with a lot of potential.
To be honest, I'm quite enthusiastic about it, but I won't go into my favorite points in this article; that's for another occasion.
Locally, you can try Exment with Docker, thanks to a published Docker setup. Thank you to its author for making it!
When I actually tried it locally (confirmed on macOS Mojave and Windows 10 Home with WSL2), it was easy. Amazing.
Then it's only human nature to want to host it in the cloud and access it from outside.
I considered various hosting options. For frontend-only projects there are free services such as Netlify and Vercel, but unfortunately this is a backend system...
As mentioned earlier, Exment can also be operated on a rental server.
Installation procedures for rental servers ("rensaba") such as Sakura and Xserver are published on the official Exment website, but unfortunately I don't have a Sakura or Xserver account. In other words, I have no rental server contract. (I do have a few personal, frontend-leaning projects on Netlify and Vercel, though.)
Should I rent a rensaba just for Exment, or would a VPS be better? After investigating various options, I realized that Amazon Lightsail is perfect for my use.
(At this point I had never used a VPS or AWS.)
The following articles cover everything from logging in to Lightsail to creating an instance in detail.
This time, I chose the cheapest instance: 512 MB RAM for $3.50/month. The theme is to keep things as cheap as possible; if the specs turn out to be insufficient, my stance is to scale up and deal with it then.
Lightsail has instance templates that bundle web applications such as WordPress, but of course there is no Exment template, since it is not that widely known yet.
Therefore, select an OS-only instance. This time, I chose CentOS 7.
Exment officially publishes a procedure for installing on CentOS 7, and there is an article that supplements it (see below).
-[Use the open source web database Exment with AWS Lightsail](https://assignment.co.jp/post/189545147785/%E3%82%AA%E3%83%BC%E3%83%97%E3%83%B3%E3%82%BD%E3%83%BC%E3%82%B9web%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9-exment-%E3%82%92aws-lightsail-%E3%81%A7%E5%88%A9%E7%94%A8%E3%81%99%E3%82%8B)
However, as you can see from the official procedure, there are quite a few steps. I actually went through it once. After throwing the environment away and starting over several times, I started thinking "I can't repeat this procedure every time!" and looking for another approach.
**No, why not just put Docker on it?**
From here, I describe the procedure, including workarounds for the pitfalls I hit. If you follow it, you can get Exment running on Lightsail + Docker in the shortest time.
Now, let's install everything.
It's tedious to add sudo every time, so I become root.
To become root on a Lightsail instance, I referred to this article.
-Become root user on AWS lightsail instance
$ sudo su -
Install Docker. The following article explains this in detail.
-[Introduction to Docker] Try to access nginx by putting docker + docker-compose on AWS lightsail
[root@ip-172-xx-xx-x ~]# yum install -y docker #install docker
[root@ip-172-xx-xx-x ~]# service docker start #Start docker
[root@ip-172-xx-xx-x ~]# groupadd docker #Be able to execute with user privileges
[root@ip-172-xx-xx-x ~]# usermod -g docker centos #Add centos user to the created group
[root@ip-172-xx-xx-x ~]# sudo /bin/systemctl restart docker.service
[root@ip-172-xx-xx-x ~]# docker info
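One thing these steps don't cover is starting Docker automatically after a reboot. Since Docker here is installed from yum on CentOS 7, enabling the systemd unit should take care of it (an optional extra, not part of the original procedure):
systemctl enable docker    # start the Docker daemon automatically on boot
systemctl is-enabled docker    # should print "enabled"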
You have now installed Docker. Next is Docker-compose.
[root@ip-172-xx-xx-x ~]# curl -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
[root@ip-172-xx-xx-x ~]# chmod +x /usr/local/bin/docker-compose
[root@ip-172-xx-xx-x ~]# docker-compose --version
Finally, install Git. The Git that comes with yum on CentOS 7 is the old 1.8 series. If you are interested, refer to the article below to install a newer version. This time I just want to git clone, so I install it with yum as is.
-Install Git2 system on CentOS7 with yum
[root@ip-172-xx-xx-x ~]# yum install -y git
You have now installed Git as well.
Since this instance has only 512 MB of RAM, I ran out of memory when running MySQL on Docker (pitfall 1), so let's create a swap area first.
I referred to the following article for how to make a swap.
-How to create CentOS7 swap file
[root@ip-172-xx-xx-x ~]# dd if=/dev/zero of=/swapfile bs=1M count=4096 status=progress
I got quite stuck here because I mistyped the command (pitfall 2).
It's a fond memory now: not knowing what kind of file /dev/zero was, I rm -rf'ed it in the course of trial and error.
I referred to this article to restore /dev/zero.
[root@ip-172-xx-xx-x ~]# mknod -m 666 /dev/zero c 1 5
[root@ip-172-xx-xx-x ~]# chown root:mem /dev/zero
About the status=progress option: the swap area this time is as large as 4 GB, and I was worried the terminal would stay silent for a while, so I added this option to show the progress of the dd command. I referred to the following article.
-How to display the progress of the dd command
Once the swap file has been created with dd, proceed calmly with the rest of the work.
[root@ip-172-xx-xx-x ~]# chmod 600 /swapfile #Change permissions
[root@ip-172-xx-xx-x ~]# mkswap /swapfile #Create swap
Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=d0519bf6-8abf-4c0d-9375-8068c9e5e9a1
[root@ip-172-xx-xx-x ~]# swapon /swapfile #Swap activation
[root@ip-172-xx-xx-x ~]# free -m #Swap confirmation
Finally, make the swap persistent. Open /etc/fstab with vi and add the following line:
/swapfile swap swap defaults 0 0
That's it.
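If you prefer not to open vi, the same persistence entry can be appended from the shell; a minimal sketch, followed by a check that the swap is actually active:
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab    # append the fstab entry
swapon -s    # list active swap areas; /swapfile should appear
free -m    # the Swap total should now be non-zero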
Clone the Exment Docker setup introduced earlier from its author's GitHub repository.
[root@ip-172-xx-xx-x /home/centos/]# git clone https://github.com/yamada28go/docker-exment.git
I think the location can be anywhere, but I cloned it directly under the default user's home directory.
When it comes to HTTPS with Docker, https-portal seems to be the well-known option.
The thing is: when I set up https-portal first and then tried to install Exment, **it did not work!!! (pitfall 3)**
So, before setting up SSL, proceed with the Exment installation in a non-SSL state.
First, let's take a look at docker-compose.yml.
docker-compose.yml
version: '3'
services:
nginx:
image: nginx:latest
ports:
- 8080:80
volumes:
- ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
- www-data:/var/www
depends_on:
- php
php:
build: ./php
volumes:
- www-data:/var/www
depends_on:
- db
db:
image: mysql:5.7
ports:
- 13306:3306
volumes:
- mysql-data:/var/lib/mysql
environment:
MYSQL_DATABASE: exment_database
MYSQL_ROOT_PASSWORD: secret
MYSQL_USER: exment_user
MYSQL_PASSWORD: secret
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 8888:80
depends_on:
- db
#Define volumes
volumes:
#Specify the name of volume
#Exment installation path
www-data:
#Set this to true to specify a volume that has already been created outside of Compose.
#Then docker-compose will not try to create the volume on docker-compose up.
#If the specified volume does not exist, an error will be raised.
#external: true
#mysql db installation path
mysql-data:
Since https-portal will use port 80 later, it is convenient that nginx is exposed on 8080. I won't edit anything here in particular; leave it as is.
Don't touch docker-compose.yml or the Dockerfile at all; only php.ini gets one additional line. On the last line, add:
memory_limit=-1
(pitfall 4; details will be described later)
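For reference, the edit can also be done in one line from the shell. The path ./php/php.ini is my assumption based on the repository layout (the php service is built from ./php); adjust it to wherever docker-exment actually keeps php.ini:
echo 'memory_limit=-1' >> php/php.ini    # lift PHP's memory limit (assumed path)
tail -n 1 php/php.ini    # confirm the line was appended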
Finally ...
[root@ip-172-xx-xx-x /home/centos/]# docker-compose up -d
Since this is the first run, it will pull the images, build them, and create the volumes.
After startup completes, run docker-compose ps to verify that all containers are up. (Before I created the swap, MySQL was exiting at this point.)
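Two commands I found useful at this point, in case a container is not up:
docker-compose ps    # every service should show State "Up"
docker-compose logs db    # if MySQL exited, its logs usually show why (e.g. out of memory)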
Complete the following steps on the Lightsail side (a CLI sketch follows the list):
- Attach a static IP
- Open port 8080
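I did both from the Lightsail console, but if you prefer the AWS CLI, something along these lines should also work (the instance and static IP names are placeholders; check aws lightsail help for the exact syntax):
aws lightsail allocate-static-ip --static-ip-name exment-ip
aws lightsail attach-static-ip --static-ip-name exment-ip --instance-name my-exment-instance
aws lightsail open-instance-public-ports --instance-name my-exment-instance --port-info fromPort=8080,toPort=8080,protocol=TCP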
Try accessing xx.xx.xx.xx:8080 in a browser; if the Laravel initial screen is displayed, it worked.
Accessing xx.xx.xx.xx:8080/admin takes you to the Exment initialization screen.
The installation wizard consists of three screens. What you enter is the DB settings on the second screen.
Since I am using Docker, fill in the settings by referring to the article "I tried to create a Docker environment for Exment":
- Host name → db
- Database name → exment_database
- User name → exment_user
- Password → secret
After this, if the installation completes successfully, the Exment initial screen will be displayed.
This Lightsail instance has 20 GB of storage, which is plenty for running something, but it will fill up quickly once you start storing binary data.
Exment has the option to save various files to an external storage service.
-(For advanced users) Change file save destination
Since we are on AWS anyway, let's set things up so files are saved to S3.
This was also my first time using S3, so I created a bucket by referring to the following article.
-How to make a bucket for AWS S3
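If you already have the AWS CLI set up, creating the bucket itself is a one-liner; the bucket name and region below are placeholders, and you still need an IAM access key as described in the referenced article:
aws s3 mb s3://my-exment-files --region ap-northeast-1    # bucket names must be globally unique
aws s3 ls    # confirm the new bucket is listed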
Settings are also required on the Exment side. Enter the already-running container with docker-compose exec php bash.
[root@ip-172-xx-xx-x /home/centos/]# docker-compose exec php bash
You are now inside the php container where Exment is running. If the prompt changes to something like this, it worked:
root@f54bef27801a:/var/www#
In other words, you are now two layers deep: inside the php Docker container, which itself runs on a Lightsail instance in the cloud.
Continue modifying the Exment (Laravel) files while following "(For advanced users) Change file save destination".
Edit the .env file in the Exment root folder with vim and add/modify the following:
.env
EXMENT_DRIVER_EXMENT=s3
EXMENT_DRIVER_BACKUP=s3
EXMENT_DRIVER_TEMPLATE=s3
EXMENT_DRIVER_PLUGIN=s3
AWS_ACCESS_KEY_ID=(AWS S3 access key)
AWS_SECRET_ACCESS_KEY=(AWS S3 secret access key)
AWS_DEFAULT_REGION=(AWS S3 Region)
AWS_BUCKET_EXMENT=(AWS S3 bucket for use with attachments)
AWS_BUCKET_BACKUP=(AWS S3 bucket for backup)
AWS_BUCKET_TEMPLATE=(AWS S3 bucket for use in templates)
AWS_BUCKET_PLUGIN=(AWS S3 bucket for use with plugins)
You don't need to set every EXMENT_DRIVER_xxx variable, only the ones you actually need, together with the corresponding AWS_BUCKET_xxx variables.
I wanted to save attachments and backup data in S3, so I set only these four: EXMENT_DRIVER_EXMENT, EXMENT_DRIVER_BACKUP, AWS_BUCKET_EXMENT, and AWS_BUCKET_BACKUP.
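Before moving on, a quick way to double-check which of these variables are actually set in .env (run inside the container, in the Exment root):
grep -E '^(EXMENT_DRIVER|AWS_)' .env    # prints only the driver and AWS-related lines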
Finally, install the library with composer.
Earlier, I edited php.ini before running docker-compose up -d. That was because Composer eats up memory and quickly hits PHP's memory usage limit, so I lifted that limit.
Composer installs take a long time, and it is painful when they fail after all that waiting (I was very sad), so double-check this setting again before running composer require.
root@f54bef27801a:/var/www# php -i | grep memory_limit
memory_limit => -1 => -1
If it shows -1, you are OK. (It means there is no upper limit on memory.)
Now it's time to install the library:
root@f54bef27801a:/var/www# composer require league/flysystem-aws-s3-v3 ~1.0 -vvv
The -vvv option tells Composer to log at the most verbose level. Composer takes a long time, and I get anxious when the terminal stays silent too long, so I have it output logs as it goes. With this option, if it does go silent for a long time, it really has either stopped for some reason or is doing genuinely heavy work...
That's all for configuring attachments to be uploaded to S3. Upload a suitable image and confirm that the files appear in your bucket.
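If you would rather check from the command line than the S3 console, listing the bucket works too (again, the bucket name is a placeholder):
aws s3 ls s3://my-exment-files --recursive | tail    # uploaded attachments should show up here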
To be clear, there is nothing wrong with HTTPS itself, but I got so stuck here that it wore me out.
I kept wondering what was wrong and rebuilt the abandoned Docker environment from scratch many times, so it took a lot of time.
As mentioned earlier, if you write the docker-compose file with https-portal mixed in from the start, the initial installation fails. In the end I ran into an unexplained error and got stuck.
I said there are three installation screens; when I pressed the install button on the last (third) screen, Laravel's familiar gray error screen appeared with:
Use of undefined constant STDIN - assumed 'STDIN' (this will throw an Error in a future version of PHP)
and no data could be inserted into the DB. Since I am not familiar with PHP and Laravel, I gave up on fixing this error.
HTTP worked fine, so I switched to the strategy of completing the installation over http first and then switching to https.
Here is just the https-portal part:
docker-compose.yml
https-portal:
image: steveltn/https-portal:1
ports:
- '80:80'
- '443:443'
links:
- nginx
restart: always
volumes:
- ./certs:/var/lib/https-portal
environment:
STAGE: 'production'
DOMAINS: >-
example.com -> http://nginx:8080
depends_on:
- php
There is nothing particularly difficult; it is not much different from the official sample. The point is that unless STAGE: under environment: is 'production' or 'staging', you cannot obtain a real certificate from Let's Encrypt. If it is left unspecified it becomes 'local', which means local development, and only a self-signed certificate is issued. This was another simple pitfall (pitfall 5).
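Once the https-portal service is added, open port 443 in the Lightsail firewall, then recreate the stack and watch the certificate being obtained:
docker-compose up -d    # creates the new https-portal service and recreates anything that changed
docker-compose logs -f https-portal    # watch the Let's Encrypt negotiation; Ctrl+C to stop following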
Well, finally: if you visit https://example.com/ and the Laravel initial screen is displayed, I would like to say we are done and successful...
but there is still more.
When you access https://example.com/admin, the screen layout collapses spectacularly!
Looking at the browser console, all the CSS/JS loaded over http is blocked as mixed content.
At first I had no idea how to deal with this, but after some trial and error I concluded that Laravel and laravel-admin were the culprits: calls like asset('/css/hoge.css') output URLs starting with http...
The fixes for this are below. Again, run docker-compose exec php bash to get inside the container.
/var/www/exment/.env
APP_ENV=production
/var/www/exment/app/Providers/AppServiceProvider.php
class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        if (config('app.env') === 'production') {
            \URL::forceScheme('https'); // It's forceScheme, not forceSchema! (pitfall 6)
        }
    }
}
/var/www/exment/config/admin.php
/*
|--------------------------------------------------------------------------
| Access via `https`
|--------------------------------------------------------------------------
|
| If your page is going to be accessed via https, set it to `true`.
|
*/
'https' => env('ADMIN_HTTPS', true), //← The default is false, so set it to true!
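After editing .env, the service provider, and config/admin.php, Laravel may still serve cached configuration; clearing it inside the php container is a safe step:
php artisan config:clear    # drop any cached config so the new .env values are read
php artisan cache:clear    # clear the application cache as well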
Now all the assets are delivered over https and the screen displays without breaking.
To be honest, I don't know if this is really the correct answer. Maybe the AppServiceProvider.php change is unnecessary... The way to write this seems to differ slightly between Laravel versions. The reality is that I read a lot of reference material and managed to settle on code that works.
If you know a better way, please let me know in the comments.
So far, it has been working nicely.
This article does not touch on security and the like, since the goal was a verification environment. If you are familiar with that area, I would appreciate supplementary information.
Next, I want to write an article that praises the wonderful parts of Exment!