
Amazon Prime, Synology and photo backup

A few weeks ago I discovered that Amazon offers unlimited space on its Amazon Drive photo storage to all Amazon Prime members! If you like photography, it might be interesting to back up all your pics to this free space automatically using your Synology NAS.

There's no limitation on formats: jpg, tiff, png and practically every image format, including all the RAW formats (Canon, Nikon, Sony, etc.).

Synology provides an app called “Cloud Sync”, which can be found in their Package Center.

Once installed, you can easily configure a daily/weekly backup task which performs all the checks and the synchronization between a local folder containing images (on the NAS) and Amazon Drive. For example, I have a Lightroom library on my Synology called /photo. On Amazon Drive I found the /Immagini folder, already created by Amazon Drive at first login. The only thing I had to do was set up this simple task, which copies from /photo to /Immagini.


If you click on the Edit button of the task, you can nicely define the Synchronization Direction: Bidirectional, Download remote changes only, or Upload local changes only.

I selected “Upload local changes only” since I want Amazon Drive to be a carbon copy of the NAS, but that's just my choice; Bidirectional would also be fine for copying data from the NAS to Amazon Drive and vice versa. What really surprised me is that this service works even with RAW files: it recognizes all the different RAW formats and lets you store them in unlimited cloud space. It's an awesome feature for all photographers and RAW shooters!

GeoJson serialization issues with Spring Data MongoDb

For a few months now I've been having fun developing a middleware application with Java 8, Spring Boot (1.5.x) and the Spring Data MongoDB module. Spring Data MongoDB supports spatial data types and it also supports the GeoJson format, which means you can send a geometry in GeoJson format (point, linestring, polygon, etc.) and Spring Data MongoDB will be able to parse it and store it correctly in the defined collection.
Once stored, Spring Data MongoDB treats it like spatial data and you'll be able to run spatial queries like Within, IsWithin, Near, IsNear, MaxDistance, etc., as in the repository sketch below.
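
For example, a repository with derived spatial queries could look like this (a minimal sketch: LineStringGeometryRepository and the method names are my own assumptions for illustration, not code from the project):

import java.util.List;

import org.springframework.data.geo.Circle;
import org.springframework.data.geo.Distance;
import org.springframework.data.geo.Point;
import org.springframework.data.mongodb.repository.MongoRepository;

// Hypothetical repository: Spring Data MongoDB derives the spatial
// queries from the method names (Within, Near, ...).
public interface LineStringGeometryRepository extends MongoRepository<LineStringGeometry, String> {

    // geometries whose geom field falls within the given circle
    List<LineStringGeometry> findByGeomWithin(Circle circle);

    // geometries whose geom field is at most maxDistance away from location
    List<LineStringGeometry> findByGeomNear(Point location, Distance maxDistance);
}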

Everything seemed awesome, but then I hit a simple and stupid issue: while retrieving data through a plain RestController, I wasn't able to get geometries back in GeoJson format. All geometries were always returned in the Spring Data MongoDB internal representation (with X and Y properties) instead of the GeoJson format.

Here's the simple controller I wrote:


@RequestMapping(value = "/listLineStrings", method = RequestMethod.GET)
public ResponseEntity<List<LineStringGeometry>> listLineStrings() {
    List<LineStringGeometry> lineStringGeometries = lineStringGeometryService.findAll();
    return new ResponseEntity<>(lineStringGeometries, HttpStatus.OK);
}

Here's the result after calling the /listLineStrings endpoint (the Spring Data MongoDB representation of a LineString):

{
	"id": "599d1d0bd3466521a8f7be7f",
	"geom": {
		"type": "LineString",
		"coordinates": [{
				"x": 10,
				"y": 56,
				"type": "Point",
				"coordinates": [
					10,
					56
				]
			},
			{
				"x": 10,
				"y": 57,
				"type": "Point",
				"coordinates": [
					10,
					57
				]
			}
		]
	}
}

Here’s what I expected to have (GeoJson):

{
    "id": "599d1d0bd3466521a8f7be7f",
    "geom": {
        "type": "LineString",
        "coordinates": [
            [10, 56],
            [10, 57]
        ]
    }
}

After googling and stackoverflowing a bit, I didn't find exactly the answer I wanted, so I wrote these 3 custom serializers for the 3 geometry types I'm using in the project: points, linestrings and polygons.

What I did was override the default Jackson serialization behavior for these types (serialization means from entity to JSON; deserialization is the opposite). Here below are the 3 classes I developed (put them in a util or serializer package for a better organization of the project).

GeoJsonPoint

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import org.springframework.data.mongodb.core.geo.GeoJsonPoint;

import java.io.IOException;

/**
 * Created by alessandro.rosa on 23/08/2017.
 */
public class GeoJsonPointSerializer extends JsonSerializer<GeoJsonPoint> {

    @Override
    public void serialize(GeoJsonPoint value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
        gen.writeStartObject();
        gen.writeStringField("type", value.getType());
        // writes "coordinates": [x, y]; writing getCoordinates() as a single
        // object inside the open array would nest the pair in an extra array
        gen.writeArrayFieldStart("coordinates");
        gen.writeNumber(value.getX());
        gen.writeNumber(value.getY());
        gen.writeEndArray();
        gen.writeEndObject();
    }
}

GeoJsonLineString

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import org.springframework.data.geo.Point;
import org.springframework.data.mongodb.core.geo.GeoJsonLineString;

import java.io.IOException;

/**
 * Created by alessandro.rosa on 23/08/2017.
 */
public class GeoJsonLineStringSerializer extends JsonSerializer<GeoJsonLineString> {

    @Override
    public void serialize(GeoJsonLineString value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
        gen.writeStartObject();
        gen.writeStringField("type", value.getType());
        // writes "coordinates": [[x1, y1], [x2, y2], ...]
        gen.writeArrayFieldStart("coordinates");
        for (Point p : value.getCoordinates()) {
            gen.writeObject(new double[]{p.getX(), p.getY()});
        }
        gen.writeEndArray();
        gen.writeEndObject();
    }
}

GeoJsonPolygon

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import org.springframework.data.geo.Point;
import org.springframework.data.mongodb.core.geo.GeoJsonLineString;
import org.springframework.data.mongodb.core.geo.GeoJsonPolygon;

import java.io.IOException;

/**
 * Created by alessandro.rosa on 23/08/2017.
 */
public class GeoJsonPolygonSerializer extends JsonSerializer<GeoJsonPolygon> {

    @Override
    public void serialize(GeoJsonPolygon value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
        gen.writeStartObject();
        gen.writeStringField("type", value.getType());
        // writes "coordinates": [[[x1, y1], ...], ...] (one array per ring)
        gen.writeArrayFieldStart("coordinates");
        for (GeoJsonLineString ls : value.getCoordinates()) {
            gen.writeStartArray();
            for (Point p : ls.getCoordinates()) {
                gen.writeObject(new double[]{p.getX(), p.getY()});
            }
            gen.writeEndArray();
        }
        gen.writeEndArray();
        gen.writeEndObject();
    }
}

After this, the only thing I had to do was use the annotation

@JsonSerialize(using = GeoJsonPointSerializer.class)

on the entity property of type GeoJsonPoint, like this:

@Document(collection = "pointGeometries")
public class PointGeometry {

    @Id
    private String id;

    @JsonSerialize(using = GeoJsonPointSerializer.class)
    @GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
    private GeoJsonPoint geom;

    public PointGeometry() {
    }

    public PointGeometry(GeoJsonPoint geom) {
        this.geom = geom;
    }

    // getters and setters omitted
}

Use the same annotation for GeoJsonLineString and GeoJsonPolygon properties:

@JsonSerialize(using = GeoJsonLineStringSerializer.class)
@GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
private GeoJsonLineString geom;

@JsonSerialize(using = GeoJsonPolygonSerializer.class)
@GeoSpatialIndexed(type = GeoSpatialIndexType.GEO_2DSPHERE)
private GeoJsonPolygon geom;
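
Deserialization, as defined above, is the opposite direction, so if you also need to accept GeoJson payloads in request bodies you can play the same trick with a JsonDeserializer. Here's a minimal sketch for GeoJsonPoint (my own untested assumption, not part of the original project; adapt it for the other types and register it with @JsonDeserialize(using = GeoJsonPointDeserializer.class)):

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;
import org.springframework.data.mongodb.core.geo.GeoJsonPoint;

import java.io.IOException;

public class GeoJsonPointDeserializer extends JsonDeserializer<GeoJsonPoint> {

    @Override
    public GeoJsonPoint deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        // expects {"type": "Point", "coordinates": [x, y]}
        JsonNode node = p.getCodec().readTree(p);
        JsonNode coordinates = node.get("coordinates");
        return new GeoJsonPoint(coordinates.get(0).asDouble(), coordinates.get(1).asDouble());
    }
}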

Since it took me a bit of time to solve this issue, I hope this post will be useful to other developers.
Cheers!!

Clustering a file system with CentOS 7

This time I wanted to build a cluster for GeoServer. Suppose you have many requests to serve and the processed tasks are really CPU intensive: what you would initially think of as a first option is to put up 2 GeoServers and, with a simple proxy balancer, split the traffic 50-50 between the 2 nodes. That's correct, but since the 2 instances have their own “installation” directory, they could theoretically serve different data, styles, shapefiles, users and so on.

As explained in many documents and books (e.g. “GeoServer Beginner's Guide“), concurrency is not a problem for data that resides on DBs, but what happens to other useful data (shapefiles and styles in particular)? You would pay twice the space by duplicating it, and moreover you would have to take care of keeping it all in sync!

What was suggested, even by the link above, was to think about a clustered file system. So how could you do it? Trash all the documentation about ricci, luci and so on: that stuff is outdated.

Read More

A bit late but here my little beast

Here's my new PC. As for every respectable nerd, there has to be some overclocked stuff: currently it runs smoothly and silently at 4.0 GHz :) (the factory frequency is 3.4 GHz). The case, the Carbide Air 540, is huge and doesn't fit easily on my desk, but the airflow is really impressive: any high CPU load is cooled down in a few seconds.

PC Specs:

  • Case Corsair Carbide Air 540 Black
  • Seasonic M12II-750 EVO
  • MSI Z87-G45 Gaming
  • Intel i5-4670K
  • Sapphire Radeon R9 290 4 GB Tri-X OC Version
  • Crucial Ballistix Tactical 16 GB DDR3 1866 MHz PC3-14900
  • Samsung 840 Evo 250 GB
  • 2 x Seagate Barracuda 14.2 2TB (Raid1)
  • 2 x PB248Q LCD IPS Monitor
  • Cooler Master Storm Devastator Keyboard
  • Mionix Castor Mouse
  • Corsair H90
  • NZXT CWhite LED Sleeve

RadScheduleView disable keyboard editing

For a few days I was searching for a way to disable editing via keyboard on the really nice RadScheduleView control provided by Telerik. I had some issues while developing an application: in particular, after pressing the “Enter” key, the CreatedAppointment event got triggered twice, and that caused some unexpected entries. Since inline editing was not that important, I wanted to disable it to keep things really simple. Here's the way:

http://www.telerik.com/help/wpf/radscheduleview-features-inline-editing.html#HowTo_Enable_Disable

<telerik:RadScheduleView x:Name="scheduleView" IsInlineEditingEnabled="False" />

Hope it will help other developers!

Visual Studio and Npgsql error while compiling

During the last 3 days I've spent some time on an annoying issue concerning Visual Studio 2012 and Npgsql (a .NET data provider for PostgreSQL).

If you've ever had something similar:

Error Message: Parser Error Message: Assembly ‘Npgsql, Version=2.1.12.0, Culture=neutral, PublicKeyToken=5d8b90d52f46fda7’ not found

while debugging or running your application, you might need to add the Npgsql.dll reference to your GAC (Global Assembly Cache).

If you have Visual Studio 2012, the related gacutil.exe is located at:

C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools\gacutil.exe

Visual Studio 2010 has the same path except for the version, which should be v7.0 (v7.1 if patched); Visual Studio 2013 should have the same path with version v9.0. Once you've found it, run the command below to insert the missing reference into the Global Assembly Cache (adding the reference from the solution is not enough).

What you need to do is open a command prompt (in my case Npgsql is located in C:\Npgsql) and run these commands:

> cd "C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1A\bin\NETFX 4.5.1 Tools"
> gacutil.exe /i C:\Npgsql\Npgsql-2.1.2-net40\Npgsql.dll


As you can see from the gacutil reference page on MSDN, /i installs the assembly as required. A good check is to run /l afterwards, to verify that it has been added correctly.
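
For example, something like this (my assumption on the exact invocation) should list the freshly installed assembly:

> gacutil.exe /l Npgsql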

New fancy graphs in OpenNMS 1.12

For 2 weeks now I've been running the newest version of OpenNMS, 1.12.1. There's a really nice improvement in this new release: fancy/awesome graphs!! Now they seem more “precise”, and even the colours give you at first sight an idea of what's happening: bad or good.

In the pic above you can have a look at the CPU graph from a Fortigate. What a nice view.

Now I would say that it's definitely the way to go: upgrade, trash all your old graphs and get the new ones.
P.S.: I would like to thank the HP RAID controller that screwed up the whole filesystem that day, otherwise I wouldn't have installed OpenNMS from scratch and gotten this nice view as a result 😉

Event translator: enrich OpenNMS notifications

Sometimes it might happen that you need “enriched” notifications from OpenNMS. Suppose you would like to send a notification including information which is present in the asset view of your node: the Event Translator of OpenNMS is what you're looking for.

On the Event Translator web page you can find the official documentation from OpenNMS, but for a full implementation I suggest you have a look at the following post.

Scenario: I will enrich the notification just for a specific category of devices that I call TimeClock. When one of these devices is down, OpenNMS will send an enriched notification that includes the node label, the IP address of the primary SNMP interface and the description stored in the node's asset record. By default, OpenNMS notifications won't give you the last 2 pieces of information.

Read More

SLES 11 HAE guide

Since last month I've been studying a nice topic: “High Availability” on Linux. If you have SLES 11 and a license for the High Availability Extension add-on, this guide can be really helpful for a good understanding. The lack of official, well-documented procedures is really common for these graceless topics, but this one, with its 495 pages, looks awesome.


It's a really comprehensive manual, accompanied by simple examples and many pictures & screenshots. If you want to get familiar with words like Corosync, OpenAIS, STONITH and others, please give it a try. In case you have a previous version like the HA Extension SP1, most of the commands seem to work perfectly: from the SP2 release notes you can see that no big changes have been made to the components, and that's what matters most 😉 (especially in enterprise environments).

Thanks SuSE for the great job; that's why I like this distro for enterprise purposes! Here's the link to the page with the .pdf file.

[Update: the guide has been updated on June 26th 2013!!]

Monitor the cluster on Fortinet devices. New OID

After updating the Fortinet firmware from v4.0 MR2 to MR3, the “cluster check” stopped working in OpenNMS. After searching the Fortinet Knowledge Base pages, I figured out that Fortinet changed the OID used for cluster checks. This is the new OID:

### cluster is up and running ###
[root@nms2 ~]# snmpwalk -v2c -c public fwIP 1.3.6.1.4.1.12356.101.13.2.1.1.1
SNMPv2-SMI::enterprises.12356.101.13.2.1.1.1.1 = INTEGER: 1
SNMPv2-SMI::enterprises.12356.101.13.2.1.1.1.2 = INTEGER: 2
### cluster is down ###
[root@nms2 ~]# snmpwalk -v2c -c public fwIP 1.3.6.1.4.1.12356.101.13.2.1.1.1
SNMPv2-SMI::enterprises.12356.101.13.2.1.1.1.1 = INTEGER: 1

To add the new service we just need to add this service definition to the poller-configuration.xml file:

<service name="FortinetCluster" interval="300000"
            user-defined="false" status="on">
            <parameter key="retry" value="1"/>
            <parameter key="timeout" value="3000"/>
            <parameter key="port" value="161"/>
            <parameter key="oid" value="1.3.6.1.4.1.12356.101.13.2.1.1.1"/>
            <parameter key="operator" value="&lt;"/>
            <parameter key="operand" value="3"/>
            <parameter key="walk" value="true"/>
            <parameter key="match-all" value="count"/>
            <parameter key="minimum" value="2"/>
            <parameter key="maximum" value="2"/>
</service>
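
If it's not there already, each service in poller-configuration.xml also needs a matching monitor entry at the bottom of the file, mapping the service name to the monitor class; for this kind of OID check, something like this should do (using the standard SnmpMonitor class):

<monitor service="FortinetCluster" class-name="org.opennms.netmgt.poller.monitors.SnmpMonitor"/>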

As a final step, the only thing missing is to add this service to the nodes concerned (your updated Fortigates).