Thursday, 6 July 2017

Implementing PATCH Verbs with Gson Jaxb and Spark Framework


TLDR; HttpURLConnection does not support PATCH, but we can sometimes use X-HTTP-Method-Override instead (this works with Spark Framework). I can patch a POJO using reflection, or deserialize to a simple POJO and check for non-null values instead.



I had not planned to implement PATCH in my testing web service - at all, ever…

But Mark Turner’s comment on my previous blog post made me reconsider and now I’ll try and aim for as many HTTP verbs as I can.

Mark asked:

we did a similar thing with the jersey client, but found one flaw: it couldn’t handle PATCH requests which are useful when testing. Does gson handle this?

The short answer is, Gson can be used to handle this, and I present two ways of handling it below.

When implementing Patch, because of the architecture I’m using I have to solve 4 problems:

  • how to patch a POJO (Domain)
  • how to parse a JSON or XML request in a Patch request suitable to allow me to patch a POJO (API)
  • how to route a Patch request (HTTP REST)
  • how to send a Patch request (Test)

Generic Solution


I thought if I could find a generic solution then I should try that first as I can make fast progress that way.

How to patch a POJO (Domain Object)


For a generic solution the first thing that popped into my mind was to use reflection, and since my objects are currently pretty simple that seems reasonable, given that my application is pretty low risk: it is intended for training and practice in testing a REST API.

I thought I’d create a generic patcher that, given a hashmap, uses reflection to find each named field and set its value.

At the moment all my fields are String so that’s pretty simple to do.

@Test
public void canPatchAListicator(){

    ListicatorList list = new ListicatorList("first title", "first desc");

    Map<String,String> patches = new HashMap<String, String>();
    patches.put("title", "this is the new title");
    patches.put("createdDate", "1996-04-01-14-54-23");
    patches.put("description", "new description");


    ReflectionPatcher patcher = new ReflectionPatcher(list, ListicatorList.class);

    patcher.patch(patches);

    Assert.assertEquals("this is the new title", list.getTitle());
    Assert.assertEquals("1996-04-01-14-54-23", list.getCreatedDate());
    Assert.assertEquals("new description", list.getDescription());

    Assert.assertEquals(0, patcher.getFieldsInError().size());
    Assert.assertEquals(3, patcher.getFieldsPatched().size());

}

And the ReflectionPatcher is also pretty simple:

public class ReflectionPatcher {

    private final Object thing;
    private final Class theClass;

    private List<String> fieldsInError = new ArrayList<>();
    private List<String> fieldsPatched = new ArrayList<>();

    public ReflectionPatcher(Object thing, Class theClass) {
        this.thing = thing;
        this.theClass = theClass;

    }

    public void patch(Map<String, String> patches) {
        for(String fieldName : patches.keySet()){
            boolean hadToSetAccessible = false;
            Field declaration = null;

            try {
                declaration = theClass.getDeclaredField(fieldName);
                if(!declaration.isAccessible()){
                    hadToSetAccessible = true;
                    declaration.setAccessible(true);
                }
                declaration.set(thing, patches.get(fieldName));
                fieldsPatched.add(fieldName);

            } catch (NoSuchFieldException e) {
                e.printStackTrace();
                fieldsInError.add(fieldName + " - did not exist" );
            } catch (IllegalAccessException e) {
                e.printStackTrace();
                fieldsInError.add(fieldName + " - could not access");
            } finally {
                if (hadToSetAccessible && declaration != null) {
                    declaration.setAccessible(false);
                }
            }
        }
    }

    public List<String> getFieldsInError() {
        return fieldsInError;
    }

    public List<String> getFieldsPatched() {
        return fieldsPatched;
    }
}

Not the prettiest, not the most robust, but for my current needs this works, and if I stick to simple objects with String fields and no nested elements, I’ve pretty much got patching of POJOs sorted.

How to parse a JSON or XML request in a Patch request


JSON


My patch requests would look something like this in JSON:

PATCH /lists/sdfjwer-siwejr-2342sn
{"title":"title4","author":"author2"}

  • The GUID in the URI path
  • the partial JSON in the body

Converting that to a map with Gson is simple, and this is often how I use Gson for tactical parsing work.

return gson.fromJson(body, Map.class);

  • turn the String body into a Map.

XML


My XML looks a little different since outer tags have to be named, not just ‘an object’.

I could make it a <patch/> element which would keep it consistent, but since this is REST, the verb PATCH pretty much tells the server what it needs so I should be able to patch a list with:

PATCH /lists/sdfjwer-siwejr-2342sn
<list><title>title4</title><author>author2</author></list>

Jaxb is good for deserializing to an object, but doesn’t want to work with a Map, so I turned to JSON-java from Sean Leary for help. Again, another package I’ve used for tactical automating in the past.

This allowed me to…

Map withHead = gson.fromJson(
                XML.toJSONObject(body).toString(),
                   Map.class);

Create some JSON from the XML and then use Gson to convert it to a Map.

The only issue is that, because of the extra ‘head’ <list> element, I have a nested map when all I want are the fields and values, so I quickly wrote:

ArrayList outer = new ArrayList();
outer.addAll(withHead.keySet());
return (Map<String, String>) withHead.get(outer.get(0));

  • Get the key of the parent and then return the submap with all its children.
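A slightly tidier way to do that unwrapping, sketched with plain java.util types (the class name here is my own illustration, not code from the project): since the parsed map has exactly one outer entry, we can take its value directly rather than copying keys into a temporary ArrayList.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PatchMapUnwrapper {

    // Given a map parsed from XML like <list><title>t</title></list>,
    // the fields sit one level down under the root element's key.
    // There is exactly one outer entry, so take its value directly.
    @SuppressWarnings("unchecked")
    public static Map<String, String> unwrap(Map<String, Object> withHead) {
        return (Map<String, String>) withHead.values().iterator().next();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("title", "title4");
        fields.put("author", "author2");

        Map<String, Object> withHead = new LinkedHashMap<>();
        withHead.put("list", fields);

        System.out.println(PatchMapUnwrapper.unwrap(withHead).get("title"));
        // prints: title4
    }
}
```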

A more elegant solution for this will occur to me as soon as I hit publish on this post, or I’ll learn it from the helpful comments.

I will investigate a more robust solution to this, but the reflection approach to the POJO amendment just needs a HashMap, so I’m done.

How to route a Patch request


This simply required wiring up the Spark routing to the API method I created, which uses the ReflectionPatcher and the generic payload-to-map convertor.

patch("/lists/*", (request, response) -> {
    return api.patchList(new SparkApiRequest(request),
                         new SparkApiResponse(response)).getBody();
});

patch("/lists", (request, response) -> {
    response.status(405);
    return "";
});

How to send a Patch request


I thought this was going to be easy:

con.setRequestMethod("PATCH");

Set the request method on my HttpURLConnection but NO.

HttpURLConnection does not support PATCH.
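You can see this restriction without any server running: HttpURLConnection validates the verb against a fixed internal list before any network activity, so setRequestMethod("PATCH") throws a ProtocolException immediately. A small probe to demonstrate (the class name is my own illustration):

```java
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;

public class PatchProbe {

    // Returns true if HttpURLConnection accepts the given verb.
    // setRequestMethod validates against a hard-coded list of methods
    // before any network activity, so no server is needed for this check.
    public static boolean supportsMethod(String method) {
        try {
            HttpURLConnection con =
                (HttpURLConnection) new URL("http://localhost/").openConnection();
            con.setRequestMethod(method);
            return true;
        } catch (ProtocolException e) {
            return false;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("POST supported: " + supportsMethod("POST"));   // true
        System.out.println("PATCH supported: " + supportsMethod("PATCH")); // false
    }
}
```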

Fortunately, Spark supports “X-HTTP-Method-Override”

And therefore if I send a POST, with a header of:

X-HTTP-Method-Override: PATCH

Spark will treat the request as a PATCH and route it accordingly.

A ‘Better’ non-generic way


For my purposes I can speed ahead with the generic way, but it would probably be better for me to have a more object-based approach.

So I tried an experiment…

In my current code I have:

  • Domain Objects: these have methods and logic and cool stuff, but they are POJOs with no annotations etc.
  • Payload Objects: these are purely for serializing and deserializing (or marshalling and unmarshalling)

Here is an example:

@XmlRootElement(name = "list")
public class ListicatorListPayload {
    public String guid;
    public String title;
    public String description;
    public String createdDate;
    public String amendedDate;
}

So what would happen if I deserialized a partial JSON into that?

return gson.fromJson(body, ListicatorListPayload.class);

Well, because all the fields are set to null at the start, if there is nothing in the JSON string, then they stay null, so I effectively have a ‘PATCH’ object where the patches are the non-null values.

What happens with XML?

JAXBContext context = JAXBContext.newInstance(ListicatorListPayload.class);
Unmarshaller m = context.createUnmarshaller();
return (ListicatorListPayload)m.unmarshal(new StringReader(body));

Same result, a ListicatorListPayload where only the patched fields are non-null.

I could now create a method on my ListicatorList which takes a ListicatorListPayload as parameter and sets the fields which are non-null, a bit like a clone operation.

Or, and I suspect I would do this, create a ListicatorListPatcher which knows how to patch a domain object from a payload - I don’t really like the idea of my Domain Objects knowing anything about the API, but I’m happy for my API to know about the domain objects.
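As a sketch of that patcher idea, with stand-in class and method names (these are not the actual project classes), the patch copies only the non-null fields from the payload onto the domain object:

```java
// A sketch of the payload-patching idea: copy only the non-null fields
// from a deserialized payload onto a domain object. Class and method
// names here are illustrative, not the blog's actual code.
public class PayloadPatcherSketch {

    // stand-in for the deserialized payload: fields absent from the
    // PATCH message stay null after Gson/Jaxb deserialization
    static class ListPayload {
        String title;
        String description;
    }

    // stand-in for the domain object
    static class DomainList {
        private String title;
        private String description;

        DomainList(String title, String description) {
            this.title = title;
            this.description = description;
        }

        String getTitle() { return title; }
        String getDescription() { return description; }

        // apply only the fields the payload actually contained
        void patchFrom(ListPayload payload) {
            if (payload.title != null) { this.title = payload.title; }
            if (payload.description != null) { this.description = payload.description; }
        }
    }

    public static void main(String[] args) {
        DomainList list = new DomainList("first title", "first desc");

        // a partial payload, as if parsed from {"title":"title4"}
        ListPayload patch = new ListPayload();
        patch.title = "title4";

        list.patchFrom(patch);
        System.out.println(list.getTitle() + " / " + list.getDescription());
        // prints: title4 / first desc
    }
}
```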

This seems like a more robust approach going forward, and if I introduce more complexity into my code base for object handling then I’ll probably use that approach.



I learned a surprising amount:

  • X-HTTP-Method-Override: PATCH as a header can allow some web servers to treat a POST method as a PATCH
  • HttpURLConnection does not like PATCH methods (hence the above)
  • Reflection is still useful for quick hacks
  • Gson deserializes to Map, Jaxb does not
  • Gson and Jaxb will both deserialize a PATCH message to null when the value isn’t present - which is handy

Wednesday, 5 July 2017

Architecting a Testable Web Service in Spark Framework


TLDR; Architecting a Web Service using Spark Framework to support more Unit testing and allow the inclusion of HTTP @Test methods in the build without deploying the application. Create API as a POJO. Start Spark in @BeforeClass, stop it in @AfterClass, make simple HTTP calls.



Background to the Spark and REST Web App Testing


I’m writing a REST Web App to help me manage my online training courses.

I’m building it iteratively and using TDD and trying to architect it to be easily testable at multiple layers in the architectural stack.

Previously, with RestMud I had to pull it apart to make it more testable after the fact, and I’m trying to avoid that now.

I’m using the Spark Java Framework to build the app because it is very simple and lightweight, and I can package the whole application into a standalone jar that runs anywhere a JVM exists, with minimal installation requirements on the user. This means I can also use it for training.

TDD is pretty simple when you only have domain objects as they are isolated and easy to build and test.

With a Web app we face other complexities:

  • it needs to be running to accept HTTP requests
  • it often needs to be deployed to a web/app server

Spark has an embedded Jetty instance so can start up as its own HTTP/App server, which is quite jolly. But that generally implies that I have to deploy it and run it, prior to testing the REST API.

If you look at the examples on the Spark web site, they use a modern Java style with lambdas, which makes it a little more difficult to unit test the code in the lambdas.

Making it a little more testable


To make it a little more testable, in the lambda I can delegate off to a POJO:

get("/courses", (request, response) -> {
    return coursesApi.getCourses(request,response);
});

This was the approach I took in RestMud and it means, in theory, that I have a much smaller layer (routing) which I haven’t unit tested.

But the request and response objects are from the Spark framework, and they are instantiated with an HttpServletRequest and HttpServletResponse. If I pass the Spark objects through to my API, I create a much harder situation for my API unit testing: I probably have to mock the HttpServletRequest and HttpServletResponse to instantiate a Spark Request and Response, and I tightly couple my API processing to the Spark framework.

I prefer, where possible, to avoid mocking, and I really want simpler objects to represent the Request and Response.

Simpler Interfaces


I’m creating an interface that my API requires - this will probably end up having many similar methods to the Spark Request and Response but won’t have the complexity of dealing with the Servlet classes and won’t require as robust error handling (since that’s being done by Spark).

get("/courses", (request, response) -> {
    return coursesApi.getCourses(
                         new SparkApiRequest(request),
                         new SparkApiResponse(response));
});

I’ve introduced a SparkApiRequest which implements my simpler ApiRequest interface and knows how to bridge the gap between Spark and my API.

I’m coding my API to use ApiRequest and have therefore created a TestApiRequest object, which implements ApiRequest, to use in my API unit @Test methods. The example below is ugly at the moment: it is a first draft @Test method and I haven’t refactored it to create the various methods that will help me make my test code more literate and readable.

@Test
public void canCreateCoursesViaApiWithACourseList(){

    Gson gson = new Gson();

    CoursesApi api = new CoursesApi();

    CourseList courses = new CourseList();
    Course course = new CourseBuilder("title", "author").build();
    courses.addCourse(course);

    ApiRequest apiRequest = new TestApiRequest();
    ApiResponse apiResponse = new TestApiResponse();

    String sentRequest = gson.toJson(courses);

    apiRequest.setBody(sentRequest);

    System.out.println(sentRequest);

    Assert.assertEquals("", api.setCourses(apiRequest, apiResponse));
    Assert.assertEquals(201,apiResponse.getStatus());

    Assert.assertEquals(1, api.courses.courseCount());

}

In the above I create the domain objects, use Gson to serialise them into a payload, create the TestApi request and response, and pass those into my API.

This has the advantage that the API is instantiated as required for testing - Spark is static so is a little harder to control for Unit testing.

I also have direct access to the running application objects so I can check the application state in the Unit test, which I can’t do with an HTTP test, I would have to make a second request to get the list of courses.

This allows me to build up a set of @Test methods that can drive the API, without requiring a server instantiation.

But this leaves the routing and HTTP request handling as a gap in my testing.

Routing and HTTP request handling testing


With RestMud I take a similar approach but I’m working a level down where the API calls the Game, and I test at the Game. Here I haven’t introduced a Course Management App level, I’m working at an API level. I might refactor this out later.

With RestMud I test at the API with a separate set of test data, which is generated by walkthrough unit tests at the game level. (read about that here).

I wanted to take a simpler approach with this App, and since Spark has a built in Jetty server it is possible for me to add HTTP tests into the build.

For some of you decrying “That’s not a Unit Test” that’s fine, I have a class called IntegrationTest, which at some point will become a package filled with these things.

To avoid deploying, I use @BeforeClass and @AfterClass methods which start and stop the Spark Jetty server:

@BeforeClass
public static void createServer(){

    RestServer server = new RestServer();
    host = "localhost:" + Spark.port();
    http = new HttpMessageSender("http://" + host);
}

@AfterClass
public static void killServer(){
    Spark.stop();
}

I pushed all my server code into a RestServer object rather than have it all reside in main, but could just as easily have used:

    String [] args = {};
    Main.main(args);
    // RestServer server = new RestServer();

Spark is statically created and managed, so as soon as I define a routing, Spark starts up, creates a server, and runs my API.

Then it is a simple matter to write simple @Test methods that use HTTP:

@Test
public void serverIsRunning(){

    HttpResponse response = http.get(ApiEndPoints.HEARTBEAT);
    Assert.assertEquals(204, response.statusCode);
    Assert.assertEquals("", response.body);
}

I have an HttpMessageSender abstraction which also uses an HttpRequestSender.

  • HttpMessageSender is a more ‘logical’ level that builds up a set of headers and has information about base URLs etc.
  • HttpRequestSender is a physical level
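A minimal sketch of that two-layer split, with illustrative names (not the actual HttpMessageSender code): the logical layer owns the base URL and accumulates headers, and delegates the actual send to a physical-layer interface that a test can fake.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a two-layer HTTP abstraction. The interface and class
// names here are my own illustration, not the blog's actual code.
public class HttpLayersSketch {

    // physical layer: actually sends a request (here, a seam we can fake)
    interface RequestSender {
        String send(String method, String url, Map<String, String> headers);
    }

    // logical layer: knows the base URL and builds up headers
    static class MessageSender {
        private final String baseUrl;
        private final Map<String, String> headers = new LinkedHashMap<>();
        private final RequestSender sender;

        MessageSender(String baseUrl, RequestSender sender) {
            this.baseUrl = baseUrl;
            this.sender = sender;
        }

        void setHeader(String name, String value) { headers.put(name, value); }

        String patch(String path) {
            // simulate PATCH as POST plus the override header, as in the post
            setHeader("X-HTTP-Method-Override", "PATCH");
            return sender.send("POST", baseUrl + path, headers);
        }
    }

    public static void main(String[] args) {
        // a fake physical sender that just echoes what it was asked to do
        RequestSender fake = (method, url, headers) ->
                method + " " + url + " override=" + headers.get("X-HTTP-Method-Override");

        MessageSender http = new MessageSender("http://localhost:4567", fake);
        System.out.println(http.patch("/lists/123"));
        // prints: POST http://localhost:4567/lists/123 override=PATCH
    }
}
```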

In my book Automating and Testing a REST API I have a similar HTTP abstraction and it uses REST Assured as the HTTP implementation library.

For my JUnit-run integration @Test methods, I decided to drop down to a simpler library and avoid dependencies, so I’m experimenting with the java.net HttpURLConnection.

How is this working out?


Early days, but thus far it allows me to TDD the API functionality with @Test methods which create payloads and set headers which I can pass in to the API level.

I can also TDD the HTTP calls, and this helps me mitigate HTTP routing errors and errors related to my transformation of Spark Request and Responses to API Request and Responses.

This is also a lot faster than running the unit tests in the build, then packaging and deploying, starting up the app, running the integration tests, and closing down the app.

This also means that (much though I love using Postman), I’m not having to manually interact with the API as I build it. I can make the actual HTTP calls as I develop.

This does not mean that I will not manually interact with the application to test it, or that I will not automate a separate set of HTTP API executions. I will… but not yet.

At some point I’ll also release the source for all of this to GitHub.

Wednesday, 14 June 2017

An introduction to Refactoring Java in IntelliJ using RestMud as an Example


TL;DR Never too late to refactor. Do it in small chunks. Protected by tests. Using IDE to refactor.





My RestMud game grew organically. I do have a fair bit of unit testing, but I perform more integration testing than unit testing.

The good thing about this approach is that the integration testing represents requirements i.e. in this game I should be able to do X, and my integration tests at a game engine level create test games and check conditions in there.

These tests rarely have to change when I amend my code.

The side-effect of this type of testing is that the classes don’t have to be particularly good, so I have a lot of large classes and not particularly good organisation.

I’m now refactoring the classes and organising the code to have 4 main sections:

  • Game Engine
  • Games
  • API
  • GUI

At the moment I’m concentrating on the Game Engine.

I have a large main class called MudGame and I’m splitting that into smaller classes now.

Refactoring from Map to POJO


As an example, my MudGame used to have a Map each for locations, collectables, and messages.

This meant that I had 4 or 5 methods for each of these collections in MudGame. Now I have only a few high-level methods in the Game, and most of the code has moved to the Locations or Collections object.

As I was doing this I had to make a decision: do I make a public final field, or do I create a private field with an accessor method?

I initially chose public final and amended the code, and then changed my mind to have an accessor method.

I don’t worry too much about this because it is easy to use IntelliJ refactoring to rename and wrap fields in accessor methods.

private Map<String, MudLocation> locations = new HashMap<>();

I refactored this to an object that manages locations, which contains all the methods that were on MudGame:

public final Locations gameLocations = new Locations();

I chose to make the field public initially, then I refactored using “Encapsulate Field”:

private final Locations gameLocations = new Locations();

and

    public Locations getGameLocations() {
        return gameLocations;
    }

Refactoring Methods to Inline Code


Sometimes when I have a method that is small and doesn’t really add any value because I delegate all the functionality off to another Object, I might choose to inline it:

    public MudLocationObject getLocationObject(String nounPhrase) {
        return getLocationObjects().get(nounPhrase);
    }

When I inline this then anywhere in the code that had:

MudLocationObject locationObject = game.getLocationObject(thingId);

Becomes:

MudLocationObject locationObject = game.getLocationObjects().get(thingId);

Some Tips for Refactoring


  • requirement level tests should not have to change during refactoring
  • make sure you have tests before you refactor
  • don’t worry too much about naming or field/method choices during initial coding because it is easy to refactor later
  • use IDE refactoring where possible
  • when code gets ugly, get refactoring
  • refactor in small chunks, keep chipping away
  • refactor low hanging fruit first as it makes it easy to see what comes next
  • group code together to loosely organise prior to refactoring into new classes
  • refactor classes to represent semantics as well as to help organise code

It’s never too late to refactor your code.


Bonus Youtube Video


See also the accompanying YouTube Video:



In the video you’ll see:

An introduction to Refactoring Java in IntelliJ with a live demo using RestMud Game. I talk you through what refactoring is, and show examples of in built refactoring functionality in IntelliJ.

  • An introduction to refactoring
  • Basic Refactoring techniques and approaches explained
  • Refactoring from fields to methods with “Encapsulate Field”
  • Run tests after each refactoring
  • Check in code to version control frequently to allow reverting if things go wrong
  • Demonstration of refactoring
  • Explanation of intermittent Unit Test Execution
  • Sometimes as we refactor we discover we are creating duplicate code. When that happens, stop and decide if the existing code is good enough.
    - Try to avoid creating code that you aren’t using yet. You have to maintain it, there are no tests,
    and you probably won’t use it in the future anyway!
  • Refactoring to Inline methods to remove methods completely. Remove the method and replace invocations
    with the code in the method
  • Reflect on your refactoring. Time to stop? Good enough to checkin? More to do?

Thursday, 13 April 2017

JSoup Tip How to get raw element text with newlines in Java - Parsing HTML and XML with JSoup

TL;DR with JSoup either switch off document pretty printing or use textNodes to pull the raw text from an element.



A quick tip for JSoup.

I wanted to pull out the raw text from an HTML element and retain the \n newline characters. But HTML doesn’t care about those, so JSoup normally parses them away.

I found two ways to access them.
  • switching off pretty printing
  • using the textNodes

Switching off Pretty Printing

When you parse a document in JSoup you can switch off prettyPrint:


Document doc = Jsoup.parse(filename, "UTF-8", "http://example.com/");
doc.outputSettings().prettyPrint(false);

Then when you access the html or other text in an element you can find all the \n characters in the text.

String textA = element.html();

Use the textNodes

This approach works regardless of whether you have prettyPrint on or off:

String text = "";
for(TextNode node : element.textNodes()){
    text = text + node + "\n\n";
}

If you accidentally use both methods then you might get confused.

I think I prefer the second approach because it works regardless.

You can find code that illustrates this on GitHub in the TwineSugarCubeReader.java file.


See also the accompanying YouTube Video:


Friday, 17 March 2017

Mistakes using Java main and examples of coding without main

TL;DR A potentially contentious post where I describe how I've survived without writing a lot of Java main methods, and how learning from code that is often driven by a main method has not helped some people. I do not argue for not learning how to write main methods. I do not argue against main methods. I argue for learning them later, after you know how to code Java. I argue for learning how to use test runners and built in features of maven or other build tools to execute your @Test code.


Monday, 5 December 2016

Let's Code - Binary Chopifier - Just Enough Code and Tooling to Start

TLDR; “Let’s code a Binary Chopifier”, which I plan, prototype in an @Test method, test interactively, experiment with, and release as an @Test.




I want to create a few more examples of “Java in Action” and I’m doing that in some YouTube videos and blog posts that I think of as “Let’s Code”. First up is “Let’s code a Binary Chopifier” which I plan, prototype to plan, test interactively, experiment, and release to Github.

Let’s code a Binary Chopifier



When I was recording - Let’s Explore Google Search I made a note to write a binary chopifier.

https://www.youtube.com/watch?v=b3izXqERlqo

In this series of videos we are going to create the binary chopifier and add it to Test Tool Hub.

Plan

First thing I did was make notes on what I wanted to support me in testing:

    Tool idea: binary chopper!
    start: 1024 end: 2048
    result

    chop: value (inc)
-------------------
        01: 1536  (512)
        02: 1792 (256)
        03: 1920 (128)
        04: 1984 (64)
        05: 2016 (32)
        06: 2032 (16)
        07: 2040 (8)
        08: 2044 (4)
        09: 2046 (2)
        10: 2047 (1)
        11: 2048 (0)

Explaining Binary Chop:
  • I try a value of length 2048
  • System doesn’t accept it because it is too long
  • I want to find the limit
  • I try 1024 (I binary chop 2048) and if that is accepted then
  • I try 1536 (midway between 1024 and 2048), and if that is accepted then
  • etc. until I narrow down on the value that is the limit

And if you watch the video you’ll see my mental arithmetic process was quite slow. I could spend the time boosting my mental arithmetic, or I could write a tool to help me.

Guess which is easier?

So I write a tool.

Thinking through an algorithm

The plan above represents a basic output to support me as the tester.

Really all I want is the chop and the value, but I used inc to help me calculate the chops:

  • So I calculate the difference between the start and end: 1024
  • Divide it by 2 (chop) to get 512 then I add that to start (inc) and get 1536
  • And keep going.

Start by writing a ‘@Test’ which does this

I start by writing an @Test method which implements this algorithm, so I can see if it works or not.

@Test
public void calculateBinaryChopForStartAndEndFromThoughtAlgorithm(){

    int start = 1024;
    int end = 2048;
    int choppoint = start;
    int inc = start;

    while(inc > 0){
        inc = (end - choppoint)/2;
        choppoint = choppoint + inc;
        System.out.println(String.format("%d (%d)", choppoint, inc));
    }
}

Which gives me the output

1536 (512)
1792 (256)
1920 (128)
1984 (64)
2016 (32)
2032 (16)
2040 (8)
2044 (4)
2046 (2)
2047 (1)
2047 (0)

Which isn’t what I was looking for, but makes sense, since on the last iteration the increment is zero.

Perhaps, then, inc isn’t really an increment; it is the difference between the end and the chop point.

So rather than ‘add to’ the start, I should ‘take away’ from the end

    @Test
    public void calculateBinaryChopForStartAndEnd(){

        int start = 1024;
        int end = 2048;
        int choppoint=start;
        int inc = start;

        while(inc > 0){

            inc = (end-choppoint)/2;
            choppoint=end-inc;
            System.out.println(String.format("%d (%d)", choppoint, inc));
        }

    }

Which gives me my original plan:

1536 (512)
1792 (256)
1920 (128)
1984 (64)
2016 (32)
2032 (16)
2040 (8)
2044 (4)
2046 (2)
2047 (1)
2048 (0)

But since I’m working from the end, I’m wondering if what I actually do is just keep halving the difference:

@Test
public void calculateBinaryChopForStartAndEndHalfDifference(){

    int start = 1024;
    int end = 2048;
    int diff = end - start;

    while(diff > 0){
        diff = diff/2;
        System.out.println(String.format("%d (%d)", end - diff, diff));
    }
}

Which gives me:

1536 (512)
1792 (256)
1920 (128)
1984 (64)
2016 (32)
2032 (16)
2040 (8)
2044 (4)
2046 (2)
2047 (1)
2048 (0)

And is much simpler.

And since this ‘test’ is a useful ‘tool’ for me - I’ll stop there for this video. And next I’ll start refactoring this out into a library for binary chopping so that I can then use that in the Test Tool Hub.


Friday, 21 October 2016

How to create and release a jar to maven central

TLDR; The instructions on the Apache and Sonatype sites are pretty good at helping you get a library released to Maven Central, but you’ll need to learn about PGP signing and might need more rigour in your pom file. A fairly painless learning process that I recommend you go through: release something to the world.




I spend most of my time with Java writing stand alone applications to support testing or code that we run as part of CI. I haven’t had to create a library that I make accessible to other people through maven central.

I thought it was about time I did so.

In this post I’ll describe what I did and how I got a .jar in Maven Central.

What is the Library?

As part of my Selenium WebDriver online training course I created a ‘driver manager’ to allow code to more easily run across different browsers. It works fine for the context of my course.

Over time I’ve started splitting the course source into multiple parts, and I’ve had to copy the Driver.java into the continuous integration project.

I decided to pull it out into a separate library and make it accessible via Maven Central; that way it will be easier for people taking the course to use the Driver class in their own code.

And I can start maintaining it as a project on its own merits, with better code and better flexibility, rather than as something that just supports the course.

Summary of what to do?

What follows is a ‘checklist’ created from my notes about how I released.

Now that I have a groupid that will synchronise to Maven Central, it should be a simpler process if I want to create any future libraries.

A bit more detail

The documentation I linked to is pretty good. I mostly just copied the information from there.
And you can see the results in the released library code:
And the sample project that uses the library:

Changed my code to use minimal libraries

One change I made to the library pom.xml is different from my normal use of the code in projects.

I decided not to include the full version of Selenium WebDriver, which I normally do when I use it:
<dependency>
   <groupId>org.seleniumhq.selenium</groupId>
   <artifactId>selenium-server</artifactId>
   <version>3.0.1</version>
</dependency>
Instead I wanted the minimum I could add, since I know that the projects using it will be incorporating the full version of Selenium WebDriver.
So I just used the Java Interface:
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>3.0.1</version>
</dependency>

Configuring repositories in the pom.xml

I haven’t had to do this for a long time. I vaguely remember doing this in the past as a workaround for some local issue we had.
In order to access the -SNAPSHOT release version of the library I have to have the repository configured in my pom.xml
<!-- to use snapshot versions of the driver manager we need to use the OSS nexus repo -->

<repositories>
    <repository>
        <id>osshr</id>
        <name>OSSHR Staging</name>
        <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    </repository>
</repositories>
I imagine that this might prove a useful workaround if I ever encounter a site that has configured the maven config via settings that we are unable to access easily.

Deploy was easier than I thought

I haven’t used the release deploy in maven before. And the instructions had a whole bunch of commands:
//perform a release deployment to OSSRH with

mvn release:clean release:prepare

//by answering the prompts for versions and tags, followed by

mvn release:perform
But in the end I didn’t have to do this.
I changed the version to remove -SNAPSHOT and it ‘released’ when I did a mvn clean deploy

Tagging a release on Github

I haven’t ‘released’ on GitHub before, so I created a release via the GitHub GUI on the releases page.

Gotchas

What went wrong?

I tried to use a groupid that I don’t own

I’ve been pretty laissez-faire with my groupids in my projects and high level package names because I’ve never released one before.
But to use maven central you need to have a domain that you own.
And someone has snapped up the .com that I often use in my code, so I needed to use the .co.uk that I own.
I might well start changing the code that I create to use this new groupid now :)

I put my group id the wrong way round

I tried mvn clean deploy for a snapshot release and I received:
[ERROR] Failed to execute goal org.sonatype.plugins:nexus-staging-maven-plugin:
1.6.7:deploy (injected-nexus-deploy) on project selenium-driver-manager:
Failed to deploy artifacts: Could not transfer artifact co.uk.compendiumdev
:selenium-driver-manager:jar.asc:javadoc:3.0.1-20161020.083347-1 from/to
ossrh (https://oss.sonatype.org/content/repositories/snapshots):
Access denied to: https://oss.sonatype.org/content/repositories/snapshots/co/uk/
compendiumdev/selenium-driver-manager/3.0.1-SNAPSHOT/
selenium-driver-manager-3.0.1-20161020.083347-1-javadoc.jar.asc,
ReasonPhrase: Forbidden. -> [Help 1]
I checked that my credentials were correct by logging into the oss nexus system
My issue was that instead of using groupid
  • uk.co.compendiumdev
I mistakenly used:
  • co.uk.compendiumdev
So don’t do that.

I forgot to release the gpg key

I forgot to release the gpg key when I created it, so I ended up trying to do a final release and seeing the following error during mvn clean deploy
[ERROR]     * No public key: Key with id: (xxxxxxxxxxxxx) was not
able to be located on
http://pool.sks-keyservers.net:11371/.
Upload your public key and try the operation again.
Make sure you do this early in the process.
Also I had to wait 10 - 20 minutes before it was accessible.
To check, visit the site you uploaded to and then search for the key.
I had to search for the key id with 0x in front of it i.e.
  • 0x740585A4
  • and not 740585A4
http://pool.sks-keyservers.net/pks/lookup?search=0x740585A4
When it was available from the search, I could then run mvn clean deploy.

Future work

I still have a lot to learn here.
As a beginner:
  • I’ve added a lot of ‘stuff’ to the pom.xml that I don’t fully understand and need to research
  • I’m sure I’m tagging the release on GitHub inefficiently
  • I’ve only done one release so I’m not sure if it is fully setup yet
  • I do this manually and haven’t added a CI deploy - when I do I’ll read the sonatype blog post more carefully
And I have an actual todo list:
  • I need to document the library and example in more detail if I want usage to spread beyond my course.
  • I need to amend my wdci and course code to use the library
But it was a lot less daunting than I expected: the documentation was pretty clear, and the OSSRH team were very helpful in getting me set up. I was very impressed, given that the OSS staging repositories and syncing to Maven Central are a free service.
Hope that helps someone on the road to making their release. All in all it was a good learning experience.

References