Last Update: 3-13-21 | Official Raspberry Pi Package now available – see below
Last Update: 11-9-20 | Updated ventz-media-pi for the new Chromium version (v.84+) and the new WideVine setup: v.4.10.1679.0+ NOTE: As of this date you MUST re-download and re-run it to get the fix – lots of changes!
Last Update: 11-2-20 | Updated libwidevinecdm.so_.zip to v.4.10.1679.0 within ventz-media-pi
Last Update: 7-22-20 | Updated .desktop with Chrome User Agent string for CrOS Chrome/77.0.3865.120
Last Update: 7-20-20 | ~redacted~ company reached out about creating an official package; there are also conversations happening with ~redacted~ company about official support of the Pi
Last Update: 5-6-20 | Specify that “Raspbian with desktop” is assumed and tested
Last Update: 4-7-20 | Fixed screen tearing

Great news – this can finally be announced :) – there is now an official Raspberry Pi package based on this work!!

Set up everything with:
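The official route is a normal apt install from the Raspberry Pi OS repository. The package name below is an assumption for the Widevine package that shipped around this date – double-check it against the official announcement:

sudo apt update
sudo apt install libwidevinecdm0   # assumed package name for the official Widevine CDM build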

Everything from here down is the “pre-official-raspberry-package” info:

If you just want to take a Raspberry Pi 4 (as of today!) and turn it into a fully functional “Media” center by just pasting one line, here it is:

SSH to your Pi (don’t run from the Pi console if you want pretty graphics :)) and run:
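The one-liner boils down to downloading the ventz-media-pi script with curl and running it with sh – roughly like this (the URL here is an assumption; use the exact command from this post):

curl -fsSL https://pi.vpetkov.net -o ventz-media-pi   # URL assumed – grab the exact one-liner from the post
sh ventz-media-pi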

This will produce the following:

Reboot, and then from the Application Menu (top left) -> go to “Internet” -> open “Chromium (Media Edition)”.

You are now ready to use your Raspberry Pi on Netflix, Hulu, Amazon Prime, Disney Plus, HBO, Spotify, Pandora, and many others.

If you need to change any browser settings, do so via the “Chromium” browser and not “Chromium (Media Edition)” – both are the same browser, so the settings are shared. The “Media Edition” (just a custom launcher with the Chrome OS user agent) cannot open the Settings page: because Chromium thinks it is running on “Chrome OS”, it looks for Chrome OS language settings that don’t exist and crashes.
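For reference, the “Media Edition” entry is conceptually nothing more than a .desktop launcher that starts the regular Chromium binary with a Chrome OS user agent – roughly like this sketch (the file name/path and the platform portion of the UA string are illustrative; the Chrome/77.0.3865.120 part matches the update note above):

# ~/.local/share/applications/chromium-media-edition.desktop  (illustrative name/location)
[Desktop Entry]
Type=Application
Name=Chromium (Media Edition)
Exec=chromium-browser --user-agent="Mozilla/5.0 (X11; CrOS armv7l 12371.89.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"
Categories=Network;WebBrowser;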

Lastly, all of this assumes the latest version of “Raspbian with desktop” (not Debian/Ubuntu! –
https://downloads.raspberrypi.org/raspbian_latest), and you can re-run the curl and sh as many times as you want without any harm.

If you are curious about some of the background and details on why you can’t easily run Netflix (and others) on your Pi and how to solve it, you are welcome to go look at my “development” blog post article: http://blog.vpetkov.net/2019/07/12/netflix-and-spotify-on-a-raspberry-pi-4-with-latest-default-chromium/

Enjoy!

As of 3-30-2020, if you want the “paste-one-line-it-just-works” setup, go to:

http://blog.vpetkov.net/2020/03/30/raspberry-pi-netflix-one-line-easy-install-along-with-hulu-amazon-prime-disney-plus-hbo-spotify-pandora-and-many-others/

^^^ PLEASE USE THE POST AT THE LINK ABOVE FOR MY “ONE-LINE IT JUST WORKS” SETUP ^^^
(IGNORE EVERYTHING BELOW THIS, AS IT’S THE ORIGINAL DEVELOPMENT WORK)
===================================================================

Everything from this point down is out of date as of: 3-30-2020

This was my initial “Netflix on the Raspberry Pi 4” development blog post. I am leaving it here for the comments, the initial work, and the details for those interested, but I highly recommend using the easy method linked above.

Last libwidevine extract: 3-29-2020 – v.4.10.1610.6 of libwidevine – EVERYTHING CONFIRMED WORKING


Chromium has made substantial changes to the way libwidevine (and a few major things around DRM) is loaded and used. They have also changed how the user agent is set and propagated. For roughly two months, the combination of these changes badly broke Netflix. They seem to have reverted the library-loading changes in the last couple of versions, and user @Spartacuss discovered the user-agent fix.

The instructions here (as of 3-29-2020) work for: Netflix, Hulu, HBO, Disney+, Amazon Prime, Spotify, Pandora, and many others.

The Raspberry Pi 4 model with 4GB of RAM is the first cheap hardware that can provide a real “desktop-like” experience when browsing the web/watching Netflix/etc. However, if you have tried to run Netflix on the Pi, you have quickly entered the disgusting mess that exists around DRM, WideVine (Netflix being one example of something that needs it), and Chromium.

After hours and hours of effort, I finally discovered a quick and elegant solution that lets you use the latest default provided Chromium browser, without having to recompile anything in order to watch any WideVine/DRM (Netflix, Spotify, etc) content.

Background and the DRM Problem

If you are not familiar with this, the short version is that Netflix (and many others, e.g. Spotify) use the WideVine “Content Protection System” – aka DRM – and if you want to watch Netflix or anything else that uses it, you need a browser with supported WideVine plugin integration. Chrome, Firefox, and Safari make it available for x86/amd64 systems, but not for ARM, since technically they don’t have ARM builds.

Chromium, the project Chrome is based on, does have an ARM build, but it does not include any DRM support, and technically it does not include WideVine support by default (*caveat here, which helps us later).
So, long story short, the question becomes: “how do you enable DRM/WideVine support in Chromium?”

It seems there are two main solutions out there. The first is to use an old (v51, 55, 56, 60) version of Chromium which has been “patched” with WideVine support (kusti8’s version seems to be the most popular one – except that since the new Netflix changes, it also no longer works); this requires uninstalling the latest available Chromium, installing the old/patched one, and dropping in older WideVine plugins. The second is to use Vivaldi – a proprietary browser from Opera’s co-founder, which has also been “sort of patched”, but it still needs a valid libwidevinecdm plugin (see below) and has its own issues (and also… it’s essentially Opera… in 2019… who uses Opera?)

After a lot of research and trial and error, I discovered a much more elegant solution – use the extracted ChromeOS (armv7l – yay!) binaries, insert them into Chromium, and make everything think it’s ChromeOS via the user agent.
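Stripped to its core, the trick looks roughly like this – a sketch only, since the plugin path varies by Chromium build and the one-liner linked above does all of this for you:

# copy the libwidevinecdm.so extracted from a ChromeOS (armv7l) recovery image into Chromium's plugin directory
sudo cp libwidevinecdm.so /usr/lib/chromium-browser/   # path is an assumption for this Chromium build
# launch Chromium pretending to be Chrome OS via the user agent (platform token assumed; Chrome version per the update notes above)
chromium-browser --user-agent="Mozilla/5.0 (X11; CrOS armv7l 12371.89.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"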

Netflix/Hulu/Spotify with the Default Raspberry Pi Chromium Browser

Continue Reading → Netflix and Spotify on a Raspberry Pi 4 with Latest Default Chromium

If you did not know about this – you should be very worried.

A few hours ago it was discovered that Apple’s FaceTime app allows anyone to call any Apple device that supports FaceTime and hear its audio without the person ever accepting the call. What’s worse (debatably?) is that doing this is incredibly easy.

Listen-in on any remote Apple device in 4 steps

1.) Start a FaceTime call with someone
2.) While the call is “connecting”, quickly swipe up on the FaceTime menu
3.) Click on the “+ Add Person”
4.) Add your own phone number/contact

What happens at this point is that the call is “bridged” and a remote audio line is open to the destination Apple device.

Yes – really! You have now turned the remote Apple device into a remote audio tap/bug. You can choose to keep a 2-way audio channel, or mute it from your side.

Update #1: Apparently it works on Mac OS Mojave too.

Temporary Fix – disable FaceTime

As of right now, until Apple patches this, the only fix is to disable FaceTime:

iOS

1.) open “Settings”
2.) click on “FaceTime”
3.) toggle it “off” (green toggle -> to white)

Mac OS Mojave

1.) open “FaceTime” app
2.) press “command + K”
or
2.) click on the top-left menu bar with the app name “FaceTime”, and select “Turn FaceTime Off”

A long time ago I became frustrated with having to update my WordPress plugins manually, so I created a Perl script and a blog post (https://blog.vpetkov.net/2011/08/03/script-to-upgrade-plugins-on-wordpress-to-the-latest-version-fully-automatically/) that explained how to automate this. The idea was quite simple: feed it a plugin name, have the script check the WordPress plugins page for the latest versioned download, grab it, and extract it over the specified blog’s plugins directory, thus updating the plugin.

The script was simple and it worked very well. It made dealing with plugins many times easier. However, there was one big downside, as some users pointed out — it did not actually check whether a plugin needed to be updated. It blindly replaced the current plugin with the latest version. This meant that there was no way to “efficiently” automate it. If you cron-ed it directly, it would simply pull and update all your plugins at whatever interval you specified. For the longest time this really irritated me, but I didn’t have time to dig through WordPress to understand how the engine checked and signaled for local plugin updates. One particular user (Joel) forked a copy and made many improvements to deal with this specific issue.

As time went on, I decided to look at this problem again. A couple of years ago I solved it in a really elegant way, but I didn’t have time to update the blog post. A few days ago, after looking at the blog statistics, I realized that the WordPress article was one of the top 10 most popular ones. So, with that said, here is:

A new simple and elegant solution

The idea is to use the WordPress CLI to “query” the local plugins database for plugin names, version numbers, and “activated” status, and then compare the “local” plugin version with the “remote” plugin version. If a plugin is active and in need of an update, fall back to my original Perl script to update it. Aha! Now we have something that can be cron-ed 🙂
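Once WP-CLI is in place (next step), the “query” side is basically a one-liner – a sketch, assuming WP-CLI is installed as wp and run from the WordPress root:

# list active plugins that report an available update; feed each name to the original Perl updater
wp plugin list --status=active --update=available --fields=name,version --format=csv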

To get started, first grab the WP CLI utility. We are going to rename it, move it to an accessible place, and take care of permissions so that we can use it:
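The standard WP-CLI install matches that description – download the phar, make it executable, and move it into your PATH as wp:

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
wp --info   # quick sanity check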

Continue Reading → Easy fully automated WordPress plugin update system

NOTE: Updated code 10-27-2018

In this day and age, where everything is measured, recorded, and available remotely (via a REST API most of the time!), it really bothered me that our heating oil tank measured the remaining gallons of oil with a crude plastic dipstick. It’s not accurate, there is no historical data, and there is no way to audit it (for honesty, accuracy, or problems/errors).

So the problem is simple enough: Find a quick and easy way to remotely monitor the number of gallons of heating oil in a home, and alert at pre-set intervals (let’s say 75%, 50%, and 25%) of remaining oil in the tank.

After looking for commercial solutions, the cheapest one I found was $120 with a $10/year fee. In my view, that’s simply ridiculous. I decided that I could build something better for a third of the price ($40), without a yearly fee.

Hardware How-To

Start with this Instructable I created with the exact parts/steps, and with lots of pictures:
https://www.instructables.com/id/Monitor-Heating-Oil-Tank-Gallons-With-Email-SMS-an/

This should take care of the hardware side.
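The software side is just a threshold check against the preset percentages; a minimal sketch, where the tank size, the sensor-reading step, and the mail command are all hypothetical placeholders (the real Pushbullet/SMS/email wiring is in the full post):

#!/bin/sh
# hypothetical sketch of the alerting logic only
TANK_SIZE=275                      # assumed tank capacity in gallons
GALLONS=$(cat /tmp/oil_gallons)    # hypothetical file written by the sensor-reading script
PERCENT=$((GALLONS * 100 / TANK_SIZE))
for LEVEL in 75 50 25; do
    if [ "$PERCENT" -le "$LEVEL" ]; then
        echo "Heating oil at ${PERCENT}% (${GALLONS} gal)" | mail -s "Oil tank alert" you@example.com
        break
    fi
done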
Continue Reading → DIY – Monitor Heating Oil Tank Gallons with Pushbullet, SMS, and Email Alerting

You need to connect to a Cisco AnyConnect (or Juniper Pulse Connect) VPN, and you cannot stand the default client for a variety of reasons (slow connects, crashes, failures to start, pointless pop-up notifications, pid-loss, etc), so you look for alternatives.

You find OpenConnect – the perfect solution, only to realize that the 3rd-party GUI is basically broken and actually doesn’t work (last checked on 8-14-17) with 2-Factor authentication (ex: Duo).

At this point, you can run OpenConnect from a terminal, which works, but you have to keep the terminal open and you have to wrap the long command in a shell script.

Or, you can use my little solution which seems to work perfectly.

[screenshots: OpenConnect GUI – Connected / Disconnected]

Everything you need to get started is on GitHub:
https://github.com/ventz/openconnect-gui-menu-bar

Continue Reading → The perfect OpenConnect GUI Menu Bar App with 2FA/Duo support – for Mac OS X

I needed a way to monitor Docker resource usage and metrics (CPU, Memory, Network, Disk). I also wanted historical data, and ideally, pretty graphs that I could navigate and drill into.

Whatever the solution was going to be, it had to be very open and customizable, easy to set up and scale for a production-like environment (stability, size), and ideally cheap/free. But most of all — it had to make sense and be really straightforward.

3 Containers and 10 minutes is all you need

To get this:
[screenshots: docker_metrics01, docker_metrics02 – Grafana dashboards]
There are 3 components that are started via containers:

Grafana (dashboard/visual metrics and analytics)
InfluxDB (time-series DB)
Telegraf (time-series collector) – 1 per Docker host

The idea is that you first launch Grafana, and then launch InfluxDB. You configure Grafana (via the web) to point to InfluxDB’s IP, and then you set up a Telegraf container on each Docker host that you want to monitor. Telegraf collects all the metrics and feeds them into the central InfluxDB, and Grafana displays them.
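A minimal sketch of that launch order, using the official images and default ports (volumes, credentials, and the actual Telegraf input/output configuration are covered in the tutorial):

# 1) Grafana – web UI on :3000
docker run -d --name grafana -p 3000:3000 grafana/grafana
# 2) InfluxDB – HTTP API on :8086
docker run -d --name influxdb -p 8086:8086 influxdb:1.8
# 3) Telegraf – one per Docker host, with the Docker socket mounted so it can collect container metrics
docker run -d --name telegraf \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf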

Setup Tutorial/Examples

Continue Reading → Monitor Docker resource metrics with Grafana, InfluxDB, and Telegraf

If you have not used Swarm, skim the non-service-discovery tutorial to get a feel for how it works:
https://blog.vpetkov.net/2015/12/07/docker-swarm-tutorial-and-examples. It’s very easy, and a couple of minutes should be enough to get the idea.

Using Swarm with pre-generated static tokens is useful, but there are many benefits to using a service discovery backend. For example, you can utilize network overlays and have common “bridges” that span multiple hosts (https://docs.docker.com/engine/userguide/networking/get-started-overlay/). It also provides service registration and discovery for the Docker containers launched into the Swarm. Now let’s get into how to use it with service discovery – which is what you would use in a scaled-out/production environment.

Again, assuming you have a bunch of servers running docker:
vm01 (10.0.0.101), vm02 (10.0.0.102), vm03 (10.0.0.103), vm04 (10.0.0.104)

Normally, you can do “docker ps” on each host for example:
ssh vm01 ‘docker ps’
ssh vm04 ‘docker ps’

If you enable the API for remote bind on each host you can manage them from a central place:
docker -H tcp://vm01:2375 ps
docker -H tcp://vm04:2375 ps
(note: the port can be omitted when using the default, 2375)
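Enabling that remote bind is just a daemon flag; a minimal sketch (no TLS here, so only do this on a trusted/private network):

# on each host, have the Docker daemon listen on TCP in addition to the local socket
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375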

But if you want to use all of these docker engines as a cluster, you need Swarm.
Here we will go one step further and use a common service discovery backend (Consul).
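For orientation, the classic (standalone) Swarm-with-Consul setup boils down to three kinds of containers; a sketch with the IPs from above (era-appropriate images and default ports – the full walkthrough is in the linked tutorial):

# 1) Consul for service discovery (here on vm01)
docker run -d -p 8500:8500 -h consul progrium/consul -server -bootstrap
# 2) a Swarm manager pointing at Consul
docker run -d -p 4000:2375 swarm manage consul://10.0.0.101:8500
# 3) a Swarm join agent on every Docker host
docker run -d swarm join --advertise=10.0.0.102:2375 consul://10.0.0.101:8500
# talk to the whole cluster through the manager
docker -H tcp://10.0.0.101:4000 ps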

Docker Swarm Tutorial with Consul and How-To/Examples

Continue Reading → Docker Swarm Tutorial with Consul (Service Discovery) and Examples

[ updated 10-30-2016 | Upgraded Plex to plexmediaserver-1.1.4.2757-24ffd60.x86_64.rpm and CentOS ]

Recently I tried setting up a Plex server in a docker container. The first problem was the 127.0.0.1:32400 bind which required logging in locally or port forwarding. After doing this once, I realized that you could use the Preferences.xml file, but that meant that you couldn’t truly automate this/deploy it elegantly in a docker container. And what if you wanted to run other servers — for friends? I finally figured out how to do this in the most elegant way possible.

First – Grab your Unique Plex Access Token

Login at https://app.plex.tv/web/app with your username and password
Open your javascript console (in Chrome: View -> Developer -> JavaScript Console)
and type:
console.log(window.PLEXWEB.myPlexAccessToken);

Note the token, which will look like this: “PZwoXix8vxhQJyrdqAbY”

At this stage, DO NOT log out of your account until you register the new server. Otherwise your token will regenerate.
Once you register the server, it won’t matter if the token changes.

Grab my Docker Image

Check out: https://hub.docker.com/r/ventz/plex/
You can pull it down by doing:
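Based on the image name in the Hub link above, the pull is simply:

docker pull ventz/plex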

Continue Reading → Plex server on a VPS Docker setup without port forwarding

A bit of background and the “old/normal way”

If you use Docker, you very quickly run into a common question: how do you make Docker work across multiple hosts, datacenters, and different clouds? One of the simplest solutions is Docker Swarm. Docker summarizes it best as “a native clustering for Docker…[which] allows you [to] create and access a pool of Docker hosts using the full suite of Docker tools.”

One of the biggest benefits to using Docker Swarm is that it provides the standard Docker API, which means that all of the existing Docker management tools (and 3rd party products) just work out of the box as they do with a single host. The only difference is that they now scale transparently over multiple hosts.

After reading up on it HERE and HERE, it was evident that this is a pretty simple service, but it wasn’t 100% clear what went where. After searching around the web, I realized that almost all of the tutorials and examples on Docker Swarm involved either docker-machine or very convoluted examples which did not explain what was happening on which component. With that said, here is a very simple Docker Swarm Tutorial with some practical examples.

Assuming you have a bunch of servers running docker:
vm01 (10.0.0.101), vm02 (10.0.0.102), vm03 (10.0.0.103), vm04 (10.0.0.104)
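As a preview of the token-based flow with those hosts (classic standalone Swarm; the token is a placeholder printed by the create step, and the ports are the defaults):

# 1) generate a cluster token (run once, from any machine with Docker)
docker run --rm swarm create
# 2) join each host to the cluster (repeat per host with its own IP)
docker run -d swarm join --advertise=10.0.0.101:2375 token://<cluster-token>
# 3) run a manager and talk to the whole pool through it
docker run -d -p 4000:2375 swarm manage token://<cluster-token>
docker -H tcp://10.0.0.101:4000 ps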

Continue Reading → Docker Swarm Tutorial and Examples