SolarEdge SE3000H monitoring
Locally monitor your SolarEdge solar panel installation. For example by using Telegraf, InfluxDB and Grafana running on a Raspberry Pi.
There are, at the time of writing, 2 ways of getting data from your solar panel installation:
Use the SolarEdge cloud API. SolarEdge logs metrics in their cloud, which you can access through their mobile app or website; your inverter must be connected to the internet. It contains rich data (per panel, if you have optimizers) and through its API all data can be fetched locally, at a maximum of 300 requests a day. The update interval is 15 minutes, so it is not suitable for a live dashboard. It is not possible to read this data out locally through LAN or Wifi (although many people would like to): a webserver is running on your inverter, but access is blocked1, according to SolarEdge's support for security reasons.
The SunSpec Modbus protocol (over TCP, or an external RS485 meter) is also supported by the inverter. Data can be requested at an update interval down to 1 second. Modbus is disabled by default. However, per-panel data (if you have optimizers) cannot be queried this way; numerous people are hoping this will be supported one day. The Modbus interface is an open standard, so it is often supported by logging software out of the box. All data is read-only2.
Both methods can be used at the same time and do not interfere with one another. Both are officially supported and documented.
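To get a feel for the Modbus interface before wiring up Telegraf, the inverter can be polled from a short Python script. This is a sketch, not part of the official tooling: the register addresses (I_AC_Power at 83, its scale factor at 84) match the SunSpec note used later in this post, but the pymodbus client API is an assumption (it changes between major versions) and the IP address is an example.

```python
# Sketch: decode SunSpec-style register values and (optionally) read them
# from a SolarEdge inverter over Modbus TCP.

def to_int16(raw):
    # Modbus registers arrive as unsigned 16-bit values; SunSpec power and
    # scale-factor fields are signed, so reinterpret the upper half.
    return raw - 0x10000 if raw > 0x7FFF else raw

def apply_scale(value, sf):
    # SunSpec splits each measurement into a raw value and a base-10
    # scale factor: real value = raw * 10^sf.
    return value * 10.0 ** sf

def read_ac_power(host="192.168.1.12", port=1502):
    # Requires the third-party pymodbus package (3.x API assumed here;
    # older versions use a different import path and keyword arguments).
    from pymodbus.client import ModbusTcpClient
    client = ModbusTcpClient(host, port=port)
    client.connect()
    rr = client.read_holding_registers(83, count=2, slave=1)
    client.close()
    return apply_scale(to_int16(rr.registers[0]), to_int16(rr.registers[1]))
```

Calling `read_ac_power()` against your inverter should return the current AC power in watts; the decoding helpers are the same arithmetic the Telegraf set-up below performs.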
SolarEdge has a somewhat backwards set-up where the level of detail available in the cloud platform is determined by the access given to you by your installer. Hence some people might see more than others. However, anyone can create an installer account on SolarEdge's website, so if you want to see all data and your installer does not want to co-operate, you can circumvent them. This is not required for this post's use-case.
This post is written for the SE3000H inverter (no LCD). My set-up is very straightforward: just a single inverter without battery or meters.
While the method itself should work for all models some configuration steps on your inverter might differ, consult SolarEdge’s documentation.
Monitoring set-up
This post will show both Modbus and Cloud logging. They are independent of each other.
This post assumes you have Telegraf, InfluxDB (2) and Grafana installed on your machine, for example a Raspberry Pi. One can substitute InfluxDB with another timeseries database (e.g. Prometheus) and Grafana with another visualization tool (e.g. Chronograf) as desired. To install and set up these applications, see this and this post.
Telegraf can be replaced by custom scripts; there are quite a few open-source options available, with Python seemingly the most popular at the time of writing. However, Telegraf has built-in Modbus support, which simplifies the set-up (especially if you already have Telegraf running anyway).
Configure your inverter
This information can also be found in SolarEdge’s official documentation.
Configuration can be done through SolarEdge's SetApp mobile application; however, not every installer enables this for its customers (you).
Enable MODBUS over TCP: Site Communication > Modbus TCP > Enable. Default port is 1502, default device ID is 1.
The TCP server idle time is 2 minutes. In order to leave the connection open, at least 1 request should be made within 2 minutes. The connection can remain open without any Modbus requests.
Hence if your inverter is restarted and your logging infrastructure does not send a Modbus request within 2 minutes, all subsequent requests will fail.
For easier configuration of the logging infrastructure it is advised to assign your inverter a static IP: Site Communication > Ethernet > Static, then fill in the network details.
Note: from experience, if the inverter is reset (the P switch toggled to the right), the IP setting might revert to dynamic (the default).
Configure InfluxDB
InfluxDB can be configured through its web UI or API.
If desired, or if not already done, generate a dedicated access token per bucket for Telegraf.
Modbus over TCP
Create a new bucket, e.g. called solaredge, with retention policy 1d.
If the data is logged with relatively high frequency, e.g. to provide a live dashboard, and one wants to retain it for longer periods of time, downsampling the data to save storage space is strongly advised. It will also positively impact query performance, as there is less data to scan. Create another bucket, canonically called [bucket name]_[downsample interval], for example solaredge_15m. The raw measurements will be logged in solaredge, then periodically downsampled (aggregated) and saved to solaredge_15m. One may create as many downsample levels as desired, for example keep 15m-downsampled data for 1 month and have a 1d-downsampled bucket retained forever. Make sure the retention policy for each bucket is properly set. I will stick to 1 downsample level, 15 minutes, keeping the data forever.
Note: due to the nature of the set-up the downsampled bucket(s) will never contain real time data.
Note: for backups the raw bucket, solaredge, may be ignored as the data is only kept for 1d anyway.
In InfluxDB create a new task running every hour, for example:
{
"meta": {
"version": "1",
"type": "task",
"name": "Downsample Modbus SolarEdge-Template",
"description": "template created from task: Downsample Modbus SolarEdge"
},
"content": {
"data": {
"type": "task",
"attributes": {
"status": "active",
"name": "Downsample Modbus SolarEdge",
"flux": "option task = {name: \"Downsample Modbus SolarEdge\", cron: \"0 * * * *\", offset: 1m}\n\nsource_bucket = \"solaredge\"\ndestination_bucket = \"solaredge_15m\"\ndestination_org = \"piserver1\"\naggregateField = (data=<-, name) => {\n\tfilteredData = data\n\t\t|> filter(fn: (r) =>\n\t\t\t(r[\"_field\"] == name))\n\tmedian = filteredData\n\t\t|> median()\n\t\t|> drop(columns: [\"_start\"])\n\t\t|> rename(columns: {_stop: \"_time\"})\n\t\t|> set(key: \"aggType\", value: \"median\")\n\tmax = filteredData\n\t\t|> max()\n\t\t|> drop(columns: [\"_start\", \"_time\"])\n\t\t|> rename(columns: {_stop: \"_time\"})\n\t\t|> set(key: \"aggType\", value: \"max\")\n\tmin = filteredData\n\t\t|> min()\n\t\t|> drop(columns: [\"_start\", \"_time\"])\n\t\t|> rename(columns: {_stop: \"_time\"})\n\t\t|> set(key: \"aggType\", value: \"min\")\n\n\tunion(tables: [min, max, median])\n\t\t|> to(bucket: destination_bucket, org: destination_org)\n\n\treturn data\n}\ndata = from(bucket: source_bucket)\n\t|> range(start: -1h)\n\t|> filter(fn: (r) =>\n\t\t(r[\"_measurement\"] == \"inverter\"))\n\t|> window(every: 15m)\n\t|> aggregateField(name: \"I_AC_Current\")\n\t|> aggregateField(name: \"I_AC_CurrentA\")\n\t|> aggregateField(name: \"I_AC_CurrentB\")\n\t|> aggregateField(name: \"I_AC_CurrentC\")\n\t|> aggregateField(name: \"I_AC_VoltageAB\")\n\t|> aggregateField(name: \"I_AC_VoltageBC\")\n\t|> aggregateField(name: \"I_AC_VoltageCA\")\n\t|> aggregateField(name: \"I_AC_VoltageAN\")\n\t|> aggregateField(name: \"I_AC_VoltageBN\")\n\t|> aggregateField(name: \"I_AC_VoltageCN\")\n\t|> aggregateField(name: \"I_AC_Power\")\n\t|> aggregateField(name: \"I_AC_Frequency\")\n\t|> aggregateField(name: \"I_AC_VA\")\n\t|> aggregateField(name: \"I_AC_VAR\")\n\t|> aggregateField(name: \"I_AC_PF\")\n\t|> aggregateField(name: \"I_DC_Current\")\n\t|> aggregateField(name: \"I_DC_Voltage\")\n\t|> aggregateField(name: \"I_DC_Power\")\n\t|> aggregateField(name: \"I_Temp\")\n\ndata\n\t|> filter(fn: (r) =>\n\t\t(r[\"_field\"] == \"I_AC_Energy_WH\"))\n\t|> last()\n\t|> drop(columns: [\"_start\", \"_time\"])\n\t|> rename(columns: {_stop: \"_time\"})\n\t|> to(bucket: destination_bucket, org: destination_org)",
"cron": "0 * * * *",
"offset": "1m"
},
"relationships": {
"label": {
"data": []
}
}
},
"included": []
},
"labels": []
}
Note: while a reduce() function (computing min, max and median at once) might seem more efficient, empirical measurements show it to be more than 5× slower.
Note: the inverter status (I_Status) is not saved when downsampling.
See also this Influxdb blogpost.
SolarEdge Cloud API
Create a new bucket, e.g. called solaredge_cloud, and keep the data forever.
Configure Telegraf
Remember to restart the Telegraf service when a config file has been changed.
Modbus over TCP
Info based on the latest technical note (2.3, Feb 2021). For the latest version consult the "SunSpec implementation technical note" on SolarEdge's website.
Only a single host/application/script can interact with the inverter's Modbus interface at the same time. See SolarEdge's documentation for more info.
Add the Telegraf config file in /etc/telegraf/telegraf.d/solaredge_modbus.conf and fill in your outputs.influxdb_v2 details.
[[outputs.influxdb_v2]]
urls = ["http://127.0.0.1:8086"]
token = ""
organization = ""
bucket = "solaredge"
namepass = ["inverter"]
# ------------------------------------------------ Inputs --------------------------------------------
[[inputs.modbus]]
interval = "15s"
name_override="inverter"
name = "NOT USED" # Not used
tagexclude = ["type", "name", "host"]
slave_id = 1
timeout = "5s"
controller = "tcp://192.168.1.12:1502"
# Note: most static data is omitted (pointless to monitor)
# Note: battery and meter data is omitted (as I have none)
#
# Always 0/not used:
# { name = "I_Status_Vendor", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [108]}
# Disabled for now since Telegraf does not support reading strings
# { name = "c_serialnumber", address = [52, 67]},
holding_registers = [
{ name = "C_SunSpec_DID", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [69]},
{ name = "I_AC_Current", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [71]},
{ name = "I_AC_CurrentA", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [72]},
{ name = "I_AC_CurrentB", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [73]},
{ name = "I_AC_CurrentC", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [74]},
{ name = "I_AC_Current_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [75]},
{ name = "I_AC_VoltageAB", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [76]},
{ name = "I_AC_VoltageBC", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [77]},
{ name = "I_AC_VoltageCA", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [78]},
{ name = "I_AC_VoltageAN", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [79]},
{ name = "I_AC_VoltageBN", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [80]},
{ name = "I_AC_VoltageCN", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [81]},
{ name = "I_AC_Voltage_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [82]},
{ name = "I_AC_Power", byte_order = "AB", data_type = "INT16", scale=1.0, address = [83]},
{ name = "I_AC_Power_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [84]},
{ name = "I_AC_Frequency", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [85]},
{ name = "I_AC_Frequency_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [86]},
{ name = "I_AC_VA", byte_order = "AB", data_type = "INT16", scale=1.0, address = [87]},
{ name = "I_AC_VA_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [88]},
{ name = "I_AC_VAR", byte_order = "AB", data_type = "INT16", scale=1.0, address = [89]},
{ name = "I_AC_VAR_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [90]},
{ name = "I_AC_PF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [91]},
{ name = "I_AC_PF_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [92]},
{ name = "I_AC_Energy_WH", byte_order = "ABCD", data_type = "INT32", scale=1.0, address = [93, 94]},
{ name = "I_AC_Energy_WH_SF", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [95]},
{ name = "I_DC_Current", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [96]},
{ name = "I_DC_Current_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [97]},
{ name = "I_DC_Voltage", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [98]},
{ name = "I_DC_Voltage_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [99]},
{ name = "I_DC_Power", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [100]},
{ name = "I_DC_Power_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [101]},
{ name = "I_Temp", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [103]},
{ name = "I_Temp_SF", byte_order = "AB", data_type = "INT16", scale=1.0, address = [106]},
{ name = "I_Status", byte_order = "AB", data_type = "UINT16", scale=1.0, address = [107]}
]
# Must come last: in TOML, any keys after a sub-table would belong to that table
[inputs.modbus.tags]
site = "" # Site ID can be found on SolarEdge's website
sn = "" # Inverter serial
# Apply the scaling and drop the scaling fields.
[[processors.starlark]]
namepass = ["inverter"]
source = '''
def scale(metric, name):
    metric.fields[name] *= pow(metric.fields[name + "_SF"])
    metric.fields.pop(name + "_SF")

def pow(exp):
    # It works, I suppose
    return float("1e{}".format(exp))

def drop(metric, name):
    metric.fields.pop(name)
    metric.fields.pop(name + "_SF")

def apply(metric):
    type = metric.fields["C_SunSpec_DID"]
    metric.fields.pop("C_SunSpec_DID")
    I_AC_Voltage_Scale = pow(metric.fields["I_AC_Voltage_SF"])
    metric.fields["I_AC_VoltageAB"] *= I_AC_Voltage_Scale
    metric.fields["I_AC_VoltageBC"] *= I_AC_Voltage_Scale
    metric.fields["I_AC_VoltageCA"] *= I_AC_Voltage_Scale
    metric.fields["I_AC_VoltageAN"] *= I_AC_Voltage_Scale
    metric.fields["I_AC_VoltageBN"] *= I_AC_Voltage_Scale
    metric.fields["I_AC_VoltageCN"] *= I_AC_Voltage_Scale
    metric.fields.pop("I_AC_Voltage_SF")
    # Drop meaningless measurements
    if type == 101:  # Single Phase
        metric.fields.pop("I_AC_VoltageBC")
        metric.fields.pop("I_AC_VoltageCA")
        metric.fields.pop("I_AC_VoltageAN")
        metric.fields.pop("I_AC_VoltageBN")
        metric.fields.pop("I_AC_VoltageCN")
    elif type == 102:  # Split Phase
        metric.fields.pop("I_AC_VoltageCA")
        metric.fields.pop("I_AC_VoltageCN")
    scale(metric, "I_AC_Frequency")
    scale(metric, "I_DC_Voltage")
    scale(metric, "I_Temp")
    # Drop obsolete measurements at night/sleep mode to reduce stored data size.
    if metric.fields["I_Status"] == 2:
        drop(metric, "I_AC_Current")
        metric.fields.pop("I_AC_CurrentA")
        metric.fields.pop("I_AC_CurrentB")
        metric.fields.pop("I_AC_CurrentC")
        drop(metric, "I_AC_Power")
        drop(metric, "I_AC_VA")
        drop(metric, "I_AC_VAR")
        drop(metric, "I_AC_PF")
        drop(metric, "I_AC_Energy_WH")
        drop(metric, "I_DC_Current")
        drop(metric, "I_DC_Power")
    else:
        I_AC_Current_Scale = pow(metric.fields["I_AC_Current_SF"])
        metric.fields["I_AC_Current"] *= I_AC_Current_Scale
        metric.fields["I_AC_CurrentA"] *= I_AC_Current_Scale
        metric.fields["I_AC_CurrentB"] *= I_AC_Current_Scale
        metric.fields["I_AC_CurrentC"] *= I_AC_Current_Scale
        metric.fields.pop("I_AC_Current_SF")
        # Drop obsolete measurements
        if type == 101:  # Single Phase
            metric.fields.pop("I_AC_CurrentB")
            metric.fields.pop("I_AC_CurrentC")
        elif type == 102:  # Split Phase
            metric.fields.pop("I_AC_CurrentC")
        scale(metric, "I_AC_Power")
        scale(metric, "I_AC_VA")
        scale(metric, "I_AC_VAR")
        scale(metric, "I_AC_PF")
        scale(metric, "I_AC_Energy_WH")
        scale(metric, "I_DC_Current")
        scale(metric, "I_DC_Power")
    # Convert serial to tag
    #metric.tags["sn"] = metric.fields["C_SerialNumber"]
    #metric.fields.pop("C_SerialNumber")
    # Correct the type: we multiply by float but some fields are still
    # reported as int (for reasons unknown).
    for k, v in metric.fields.items():
        if k != "I_Status":
            metric.fields[k] = float(v)
    return metric
'''
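The scale-factor handling above is plain arithmetic and easy to sanity-check outside Telegraf. Below is a plain-Python mirror of the Starlark scale()/pow() helpers (the field names and values are just an example):

```python
# SunSpec delivers (value, scale factor) pairs; real value = raw * 10^sf.

def pow10(exp):
    # Same trick as the Starlark script: build 10^exp via a float literal.
    return float("1e{}".format(exp))

def scale(fields, name):
    # Apply the matching "_SF" field, then drop it, like scale() above.
    fields[name] *= pow10(fields[name + "_SF"])
    fields.pop(name + "_SF")
    return fields

fields = {"I_AC_Power": 2345, "I_AC_Power_SF": -1}
scale(fields, "I_AC_Power")
print(fields["I_AC_Power"])  # roughly 234.5 W
```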
To test if it works, run: telegraf -config /etc/telegraf/telegraf.d/solaredge_modbus.conf -test.
SolarEdge Cloud API
Telegraf supports reading data from HTTP(S) endpoints; however SolarEdge's API requires, for most calls, that you specify the desired time range (e.g. the last hour). This requires information about the update interval as well as dynamic URLs, neither of which Telegraf supports at the moment. Alas.
As an alternative there are a couple of options: create a script and call it with an execd input, or create a service (systemd) which writes to a file/TCP (unix domain) socket/UDP, which Telegraf then reads. One could also take Telegraf completely out of the equation, since logging to InfluxDB is easy to implement and some scripting is required anyway. All have pros and cons.
I went for a Python script running as a service (execd) because it is only a single file (compiled languages typically require some build/project files as well).
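A minimal skeleton of such an execd daemon, for illustration: Telegraf starts the script once and reads InfluxDB line protocol from its stdout, one line per metric. The measurement, tags, fields and the 60-second loop below are placeholders, not the actual scraper.

```python
#!/usr/bin/env python3
# Sketch of an execd-style daemon: emit InfluxDB line protocol on stdout.
import sys
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    # Line protocol shape: measurement,tag=val field=val [timestamp]
    tag_str = "".join(",{}={}".format(k, v) for k, v in sorted(tags.items()))
    field_str = ",".join("{}={}".format(k, v) for k, v in sorted(fields.items()))
    line = "{}{} {}".format(measurement, tag_str, field_str)
    if ts_ns is not None:
        line += " {}".format(ts_ns)
    return line

def run_forever():
    # Under Telegraf's execd input this loop would run indefinitely.
    while True:
        fields = {"power": 1500.0}  # placeholder for an actual API call
        print(to_line_protocol("energy", {"site": "1"}, fields))
        sys.stdout.flush()          # execd reads line-buffered output
        time.sleep(60)

# One-shot demonstration of the output format:
print(to_line_protocol("energy", {"site": "1"}, {"power": 1500.0}))
```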
Copy the Python daemon to /var/lib/telegraf/ and mark it as executable: chmod +x /path. Do not forget to fill in the token (found on the SolarEdge website) and login details at the top of the script.
Install the Python packages: python-requests, python-pytz.
Some (generated) files are stored in the telegraf user's home folder, /var/lib/telegraf/.
The script is only tested on Linux, YMMV.
To get all data multiple requests are required; at the time of writing the rate limit is 300 requests per day (see SolarEdge's documentation for more info), which limits the update rate. The Modbus data will be used for a live dashboard, and the cloud data will only be scraped once a day.
By default the panel data will contain 2 aggregate values, each with its own id. They are typically not of interest and can be excluded in this script (exercise for the reader) or when visualizing the data (e.g. in Grafana).
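If you do want to drop the aggregate entries in the script, a filter on the panel id is enough. A sketch of that exercise; the ids and the dict layout here are hypothetical, so check which ids your own installation reports for its aggregate entries:

```python
# Sketch: exclude aggregate pseudo-panels from per-panel measurements.
AGGREGATE_IDS = {0, 1}  # placeholder ids, not the real ones

def real_panels(measurements):
    # measurements: iterable of dicts with at least an "id" key
    return [m for m in measurements if m["id"] not in AGGREGATE_IDS]

panels = [{"id": 0, "power": 900.0},   # aggregate entry (dropped)
          {"id": 7, "power": 310.5},   # actual panel (kept)
          {"id": 8, "power": 298.2}]
print(real_panels(panels))
```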
Add the Telegraf config file in /etc/telegraf/telegraf.d/solaredge_cloud.conf and fill in the InfluxDB details.
[[outputs.influxdb_v2]]
urls = ["http://127.0.0.1:8086"]
token = ""
organization = ""
bucket = "solaredge_cloud"
namepass = ["power","energy","data","panel"]
# ------------------------------------------------ Inputs --------------------------------------------
[[inputs.execd]]
tagexclude = ["host"]
command = ["/var/lib/telegraf/solarEdgeCloudScraper.py"]
signal = "none"
restart_delay = "10m"
data_format = "influx"
To test if it works, run: telegraf -config /etc/telegraf/telegraf.d/solaredge_cloud.conf -test. If you want to print something to debug in Python, use print_err(...).
Configure Grafana
Since the data logged from the cloud and over Modbus differ, two different dashboards are required.
If you also have a battery and meter, more stats can be displayed; for inspiration see this German forum (some tinkering will be required as they do not use InfluxDB 2), or this Dutch forum.
This post is written for Grafana 7.5.
Make sure to fill in your site ID and inverter serial in the dashboards, and/or the bucket names if you have used different ones.
Modbus dashboard
Add the solaredge InfluxDB bucket as a "SolarEdge Modbus" Flux (query language) data source.
To get the sun altitude, install the "Sun and Moon" data source (grafana-cli plugins install fetzerch-sunandmoon-datasource), restart the Grafana service and add the data source. This information, together with the power production, can be used to eyeball how good a day it was for power production (i.e. optimal vs actually produced electricity).
Import the Grafana SolarEdge Modbus dashboard. The dashboard is also available on Grafana with id 14168.
Result
Grafana 7d dashboard overview
Overview of the full dashboard
SolarEdge Cloud dashboard
Add the solaredge_cloud InfluxDB bucket as a "SolarEdge Cloud" Flux (query language) data source.
Import the Grafana SolarEdge cloud dashboard. The dashboard is also available on Grafana with id 14169.
Result
Grafana dashboard overview
Overview of the full dashboard
Import historical data from SolarEdge Cloud
The Python script to fetch data from SolarEdge's cloud also supports scraping the full history. Store the following config somewhere outside the Telegraf folder, e.g. /home/alarm/solaredge_cloud_history.conf, as it only has to be run once, and fill in your outputs.influxdb_v2 details.
[agent]
debug = true
quiet = false
metric_buffer_limit = 1000000 # Enlarge as required if you have a lot of history
omit_hostname = true
[[outputs.influxdb_v2]]
urls = ["http://127.0.0.1:8086"]
token = ""
organization = ""
bucket = "solaredge_cloud"
# ------------------------------------------------ Inputs --------------------------------------------
[[inputs.exec]]
command = "/var/lib/telegraf/solarEdgeCloudScraper.py history"
timeout = "604800s" # 7d
data_format = "influx"
Run the following command once: telegraf --once --config /home/alarm/solaredge_cloud_history.conf. Afterwards remove the config and the generated files from the user's home folder (e.g. /home/alarm/).
With exploits it was possible in the past to open access to the internal server, allowing one to log all data available in the cloud locally at a higher update interval. However, at the time of writing all known exploits are patched. Some people opted to disable automatic firmware updates to remain on an exploitable firmware version; this is however not possible for new customers. Some man-in-the-middle (MITM) attacks are still possible and there are also hardware exploits, neither of which I recommend, so I will not discuss them further. ↩︎
Technically not true, but not relevant for this use-case and not well documented. ↩︎
- Permalink: //oostens.me/posts/solaredge-se3000h-monitoring/
- License: The text and content is licensed under CC BY-NC-SA 4.0. All source code I wrote on this page is licensed under The Unlicense; do as you please, I'm not liable nor provide warranty.