Anthony Bennett’s historically poor start

(I normally don’t write about sports here, but I watch from a distance and have little emotional investment.)

I have given up on Anthony Bennett.

Giving up on Anthony Bennett’s potential within 3 months of the season may seem premature, but no respectable* player in the NBA’s modern era (86-87 to present; that’s all I can find stats for) has started out as poorly or had as long a sustained stretch of mediocrity as Anthony Bennett has.

Quantifying someone’s contribution to an NBA game is subjective, but one stat that is out there is a player’s game score. I know it’s not perfect because it doesn’t highlight the contributions of defensive-minded players like Bruce Bowen, Kurt Thomas, Ben Wallace, etc. But it’s used quite often and represents how a player contributes to a game.
(here’s the formula for it: http://www.basketball-reference.com/about/glossary.html – search for GmSc)

Let’s take a look at Anthony Bennett so far this year.

http://www.basketball-reference.com/players/b/bennean01/gamelog/2014/

He has had only 1 game with a game score above 5 (Dec. 31, 2013, vs. the Pacers: http://www.basketball-reference.com/boxscores/201312310IND.html), and in all but 2 of these 27 games, he’s played more than 5 minutes.
Anthony Bennett has had streaks of 15 and 10 consecutive games of this mediocre performance: a game score below 5.

So, here’s a list of the players who have had the most consecutive games with a game score below 5 while playing at least 5 minutes per game… the company that Bennett keeps:

1988-89 through the 93-94 seasons

94-95 through 99-2000

00-01 through 04-05

04-05 through 08-09

http://www.basketball-reference.com/play-index/pstreak.cgi?request=1&player_id=&year_min=2005&year_max=2009&team_id=&opp_id=&is_playoffs=N&game_location=&is_starter=&c1stat=mp&c1comp=ge&c1val=5&c2stat=game_score&c2comp=le&c2val=5&c3stat=&c3comp=ge&c3val=&c4stat=&c4comp=ge&c4val=

08-09 through 13-14

http://www.basketball-reference.com/play-index/pstreak.cgi?request=1&player_id=&year_min=2010&year_max=2014&team_id=&opp_id=&is_playoffs=N&game_location=&is_starter=&c1stat=mp&c1comp=ge&c1val=5&c2stat=game_score&c2comp=le&c2val=5&c3stat=&c3comp=ge&c3val=&c4stat=&c4comp=ge&c4val=

Heck, if we remove the 5 minute requirement, Bennett’s streak was at 24 games!

Here are those lists of consecutive games with a game score below 5, regardless of minutes played.
08-09 through 13-14:

(Look whose names are on there! Former Cavs Sasha Pavlovic and Diop; fellow recent lottery draft bust Jan Vesely… Not far down is another bust, Austin Rivers…)

02-03 to 07-08:


96-97 to 01-02

90-91 through 95-96: http://www.basketball-reference.com/play-index/pstreak.cgi?request=1&player_id=&year_min=1991&year_max=1996&team_id=&opp_id=&is_playoffs=N&game_location=&is_starter=&c1stat=&c1comp=lt&c1val=&c2stat=game_score&c2comp=le&c2val=5&c3stat=&c3comp=ge&c3val=&c4stat=&c4comp=ge&c4val=

Out of these lists of hundreds of players who have played this poorly for such a sustained stretch, does anyone respectable appear? Do you even recognize any of those names?! (Yes, I do, and that’s not a compliment to me…)
Would you want any of those players on your team, or heck, even as a starter who would also receive minutes in crunch time and the fourth quarter?

What respectable players share these honors with AB? Mike Bibby, Ben Wallace, Lou Williams, Rashard Lewis, Andrew Bynum, and Elden Campbell are some notable names out of the hundreds who appear on those lists. (Wallace, who I’d argue was the best defensive player of the 2000s, is an anomaly: his strength was so much on defense, and he took very few shots, leading to low game scores.) There are a few more (Shawn Kemp) whose names appear AFTER they hit their prime, when they were scrubby role players.

What lottery picks are on this list? Anyone who deserved to be taken as a #1 pick?!

Fact is, only a very small percentage of players who became at least a respectable starter ever played this poorly over such a sustained stretch at any point in their careers.

Despite the small sample size, AB has already sustained a stretch of mediocrity that very few NBA players have experienced, regardless of how successful their careers turned out.

The probability that he becomes a future star, or even a viable starter or key rotation player in the NBA, is dwindling by the day.

Follow along as I create a map of where I’ve been in 2013

(Follow along in my process of creating a simple heat map – from initial idea, to brainstorming, trial and error, coding, designing, and more trial and error in between)

Before I start:
goal: create a heat map that displays all of my traces of where I was in 2013.

what’s the map’s purpose: create something purdy and find out where I have been the most in 2013.

Context: In 2013, I took 140 GPS traces* using OSMTracker, my favorite GPS logging software, on Android. Nearly all traces were taken while I was driving or riding my bicycle.
(*This does not include traces that I took in Haiti in May–June while with the Humanitarian OpenStreetMap Team.)

Here’s some brainstorming:
- what tool to use? Most of my projects and cartographic exploration have been with Tilemill.
In this case, using Tilemill isn’t a smart move, because I’d have hundreds of layers to manually add.

I thought of manually combining all of the traces together – which I did using this simple bash one-liner and a GPX file manipulator called gpsbabel:

# merge every GPX file passed as an argument into appended.gpx
gpsbabel -i gpx $(for GPX in "$@"; do echo " -f $GPX "; done) \
-o gpx -F appended.gpx
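(If that one-liner is saved as a script, say merge.sh – a name I’m making up – it can then be run over every trace at once:)

bash merge.sh *.gpx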

I added the merged GPX file as a new layer to a Tilemill project and received an error:
“This datasource has multiple layers:
‘waypoints’,’routes’,’tracks’,’route_points’,’track_points’
(pass layer= to the Advanced input to pick one)”

Instead of looking for a workaround for the error, I thought: hey, let’s give CartoDB a shot!
I have been looking for an excuse to use CartoDB for ages. I had played around with it minimally over the past year and a half, but between my comfort with Tilemill and the fact that I simply hadn’t had a project where it seemed like the right tool for the job, it never stuck.

Let’s give CartoDB a shot now…
uh-oh :(
“appended.gpx is too large (it’s 5.6mb). You can import files up to 4.53 MB.”

Perhaps I should simplify the GPX traces, I thought? I also thought: let’s just convert it to GeoJSON and see if that would reduce its size… so, using ogr, I tried:

ogr2ogr -f "GeoJSON" traces.json appended.gpx
ERROR 6: "GeoJSON driver doesn't support creating more than one layer"
ERROR 1: Terminating translation prematurely after failed
translation of layer routes (use -skipfailures to skip errors)

What?! “What do you mean by multiple layers?” I thought.
- Does ‘layers’ in the error message mean geometry types (points, linestrings)? Or multiple geometry collections? Some extensive Google searching just led to the GDAL source code, which didn’t give me any more clues…
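(One thing I could have tried at this point: ogrinfo, GDAL’s companion tool, lists the layers in a datasource, which would have answered the question directly. A sketch; the output below is my approximation of what the GPX driver reports:)

ogrinfo appended.gpx
# 1: waypoints (Point)
# 2: routes (Line String)
# 3: tracks (Multi Line String)
# 4: route_points (Point)
# 5: track_points (Point)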

I walked away for an hour or 2, then thought something might be wrong in my GPX file.
I opened the GPX file in QGIS, and hey, it asked me which layer of the file to open.


My GPX file actually consisted of 5 layers. The layer that I want – the linestrings that I recorded – ended up being the tracks layer. (Of the other layers: routes was blank; track_points were the individual points that make up my linestrings; and waypoints were points where I had made a voice recording, taken text notes, or taken a picture.)

My traces and points in QGIS:

So, I thought I could just specify the layer with the following:
ogr2ogr -clipsrclayer tracks -f "GeoJSON" tracksonly.json appended.gpx

I thought this would select the layer to be exported, but it does not.

Alas, that didn’t work; I received the same error as mentioned earlier in the post.
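(An untested guess for later: ogr2ogr also accepts layer names as trailing arguments after the source datasource, so something like this might export just the tracks layer:)

ogr2ogr -f "GeoJSON" tracksonly.json appended.gpx tracks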

So, fellow readers:

Any suggestions for tools to use or workflow for this?

(Updates to this post will be made as I progress along)

What I plan to do :

- Some more Google and gis.stackexchange searching for inspiration.
- Save the GPX file (selecting only the tracks layer) as JSON in QGIS and open it in CartoDB and/or Tilemill.


I also wonder: could I simply convert each trace individually to a GeoJSON file, and then add all of them as layers in a Leaflet instance?

Update 1, 2014/01/02, 7pm:
- I converted the tracks layer in my GPX file to JSON (only 1.3mb!) and created a CartoDB project for it. Success!
- I’ve also created a tilemill project and loaded my JSON in there. (I’ll post links soon)

I thought of a new idea: increasing or changing the intensity of a section of a line based on its proximity to other lines…
For example, if there’s another line within 60 meters of it at a specific point, increase the intensity of the line’s color for a few meters…

This cannot be done out of the box in Tilemill or CartoDB… but I have a hunch that this sort of calculation could be done in PostGIS… I would then reimport the data into CartoDB and style it there.
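(A rough, hypothetical sketch of that PostGIS calculation, assuming the traces were loaded into a table named track_points with id and geom columns – both names made up here:)

# for each point, count how many other points lie within 60 meters;
# the nearby count could then drive a color-intensity ramp in cartodb
psql -d traces -c "
SELECT a.id, COUNT(b.id) AS nearby
FROM track_points a
JOIN track_points b
  ON a.id <> b.id
 AND ST_DWithin(a.geom::geography, b.geom::geography, 60)
GROUP BY a.id;"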

HMM, this also looks interesting – http://developers.cartodb.com/tutorials/intensity_map.html

Some thoughts on NACIS, 2013.

I went to NACIS, the annual conference of the North American Cartographic Information Society.
This was my 2nd NACIS; my first was 2012 in Portland.

This experience was much more positive and enriching for a few reasons, mostly personal:

- Looking back, I was intimidated last year. At the time, I was an unemployed, struggling freelancer at my first cartography conference (and only my third professional conference). Many of my mapping heroes were there, none of whom I had met before. Since last year, I’ve learned a lot more and felt more comfortable with what was being discussed. I recall a couple of talks last year where I felt completely lost in what was being presented (bivariate maps, for example). No instances of that this year :)
The crowd was a bit younger – meaning there were people in situations similar to mine, or of a similar age. Last year, I felt pretty young, outside of the college students.
Additionally, there were several people from the Twitter and web sphere whom I recognized (many more than last year), or had met online, or knew of their work – and I had the chance to see them in person or catch up with them (Alan M, Andy Woodward, Mele, Mike Foster, AJ, Ian, and Dane from Mapbox, Matt McKenna, Mamata Akella).
All friendly people. I even drank and played pool with them. Interestingly, those who tweeted more often also tended to be more extroverted.
Plus, it helps when your local mentor also attends the conference :)

- Presentations were more relevant to my background: I wanted to see practical things that I could implement in my work (web maps), things that push the boundaries of what a typical map is – particularly on the web – and what can be mapped in new ways (to wide audiences) and how. I’m not generally a theory person; my entire experience with ESRI products is about 30 minutes. Last year, there was much more of an academic slant, with a focus on theoretical talks and historical mapping.
This year, I’m really glad that I shelled out the extra $90 or 100 for Practical Cartography Day. It was quite practical – almost all on web maps of some sort – and was likely the best day of the conference. Even serendipitous conversations were really relevant: a lunch discussion that evolved into neighborhood boundaries (one theoretical topic that I find really intriguing) was great.

- Even outside of PCD, this year had much more focus on web maps and on the open-source tools that I’m either quite comfortable with or anxious to use: Tilemill, Leaflet, D3, QGIS. Someone (I forget who) mentioned that, this year, established, older cartographers are finally accepting that the internet (specifically through web maps, not just static images) is the primary medium for contemporary cartography and is here to stay, and some are even beginning to use these tools.

- Wednesday night featured a map gallery: 30-50 printed maps by students, professionals, and everyone in between, on display. While it was great to see, I would love for the map gallery to include online web maps, even if that means dozens of computers set up to view them.

- My only other criticism is for presenters: although slides aren’t the same as attending the talks, they can be at least of some use, and helpful to those who didn’t attend. Share them! You just need to add a link to your talk on Lanyrd:

http://lanyrd.com/2013/nacis/schedule/

- I presented on the humanitarian and lower-income countries’ web map designed by the Humanitarian OpenStreetMap Team (presentation available at: http://skorasaurus.github.io/hotatnacis2013/). It went well, although it was very difficult to see the contrasting colors on the projected screen; you couldn’t tell whether a road was yellow or beige, or see a street’s outline very well.

In retrospect, I should have tried a dry run with the projector; maybe I could have made simple modifications (like dimming the lights). More interestingly, I felt like I belonged more since I presented, or at least had something to offer this time. Last year was definitely a feeling of imposter syndrome. Discussing someone’s presentation is also a great conversation starter.

- Despite at least 15% of talks being rescheduled or cancelled, the organizers did a great job of making things run as smoothly as possible. Without their last-second adjustments, this could have been a trainwreck.

- Notable absences: Code for America had a much smaller presence than last year, although that could be because its annual conference was the following week, or because CfA had fewer map-related projects this year. And the Feds :( Several presenters were employees of the US government and, as a result, couldn’t present because of the shutdown. Laaame! I missed out on a few talks as a result (3 of the 4 in the same session as my presentation were cancelled!). Kudos to those who filled in at the last minute.

- An interesting point that Eric Thiesse, Matt McKenna, and I discussed at the Friday banquet dinner, the last night of the conference: we all do web mapping, and we marvelled at how much change there is in the geospatial world even as we try to keep abreast of new technologies, tools, and mapping libraries. How do you keep up and stay sharp on them? Then I remembered what Tom MacWright tweeted months ago regarding this: you don’t. I’ve had that in the back of my mind since. It’s impossible to keep up. You pick your battles. Now that my time to work on geospatial projects and mapping is greatly reduced (thanks, day job!), I’m picking my battles and being a little more discerning about what to learn and which projects to spend time on. I’m slowly coming to terms with the fact that I will fall behind on some things.

Counting the Use of tags in an osm2pgsql database.

Earlier this week, I was talking with a friend who just moved from Massachusetts to Cleveland; they were a bit surprised that “middle school” and “junior high” were both used to describe schools consisting of grades 6-8.

I was curious about this myself and wondered which is used more often in Ohio. The general outline that I use below for counting the use of “middle school” and “junior high” in Ohio can be applied if you want to see which tag is used more often in any specific state, country, or other place.

Of course, there are multiple ways to do this, and I wish there were an easier way, but here’s what I did…

Because I was comparing 2 tags within an entire state, I couldn’t use the USA implementation of taginfo (which covers the entire USA), and I couldn’t open an OSM file containing the entire state of Ohio in an OSM editor like JOSM.

So, for my state of Ohio, I figured this out by:

1. Downloading an extract of my state from Geofabrik.

2. Creating the PostGIS database. (I have Ubuntu 12.04, PostGIS 2.0, and PostgreSQL 9.1, and I assume you already have these installed; the commands are different if you’re using PostGIS 1.5… I should explain this step more clearly in a different post, since I never found good introductory documentation for PostGIS/PostgreSQL/OSM when I first started learning this back in ’11.)

2a. createdb nameofyourdatabase
2b. psql -d nameofyourdatabase -c "CREATE EXTENSION postgis;"

 

3. Filling the database with the OSM data through the osm2pgsql software, with the following command: osm2pgsql -s -d nameofyourdb ~/path/to/data.osm.pbf

4. Using pgAdmin’s GUI, I connected to my database, clicked on the magnifying glass with “SQL” within it, and entered the following SQL:

SELECT name FROM planet_osm_polygon WHERE lower(name) ~ 'junior high' UNION ALL SELECT name FROM planet_osm_point WHERE lower(name) ~ 'junior high' ORDER BY name;

(Thanks to Paul Norman for assisting me with the proper SQL query syntax.)

SELECT name FROM planet_osm_polygon WHERE lower(name) ~ 'junior high'
This selects all closed ways whose name contains ‘junior high’ (case-insensitive).

Now, here’s what the finer points of the syntax mean:
* name – This is the column ‘name’. osm2pgsql creates columns based on the first half, also called the ‘key’, of an OSM tag. OSM tags are written out as key=value.

Other columns generated by osm2pgsql include highway, amenity, leisure, and many more. Because it’s highly unlikely that there’s a store named ‘junior high’, and for the sake of simplicity, I didn’t need to specify that an object must also have amenity=school in addition to ‘junior high’ in its name.

  • planet_osm_polygon – This is the name of the table in osm2pgsql that contains all closed ways. Here are the names of the other tables in osm2pgsql.
  • WHERE – specifies the condition on which rows are selected. If I wanted to query a simple tag with a standard key and value, like amenity=parking, I could simply do WHERE amenity IN ('parking'). But since ‘junior high’ occurs in the middle of a text phrase, the tilde (~) performs a pattern match, searching for ‘junior high’ within the tag value.

So, this will return results for name=Mooney Junior High School, name=Junior High School, and name=wilkens junior high, regardless of case. (To be precise, ~ itself is a case-sensitive regular-expression match in PostgreSQL; that’s why the query wraps the column in lower() first. There’s also a case-insensitive variant, ~*.)
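(For example, this should be equivalent, run through psql against the same database:)

psql -d nameofyourdatabase -c "SELECT name FROM planet_osm_polygon WHERE name ~* 'junior high';"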

  • UNION ALL – this allows you to run multiple queries as one and include the results of both at once.

SELECT name FROM planet_osm_point WHERE lower(name) ~ 'junior high' ORDER BY name

Because OSM objects can be tagged as either nodes or ways, I also need to search for any nodes that have ‘junior high’ in their names! The nodes’ table is named planet_osm_point, and the structure of the syntax is nearly the same.

  • ORDER BY – this is simple; it merely sorts the results by a column. In this instance, I want them in alphabetical order, so I did ORDER BY name.

Now, we can execute our query by clicking “Execute query” (its icon looks like the play button on a DVD/VCR player).

In pgAdmin’s lower right-hand corner will be the number of results returned, along with the names of all of the schools with ‘junior high’ in them…

So, we see 219 results for ‘junior high’ in Ohio! There are a few duplicates, which is interesting. Some may be the same name in 2 different places; some may be duplicate nodes of the same school.

And we repeat the process for ‘middle school’, and… 389!
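(For reference, the repeated query with the pattern swapped, runnable through psql as well:)

psql -d nameofyourdatabase -c "
SELECT name FROM planet_osm_polygon WHERE lower(name) ~ 'middle school'
UNION ALL
SELECT name FROM planet_osm_point WHERE lower(name) ~ 'middle school'
ORDER BY name;"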

‘Middle school’ is used more often in Ohio than ‘junior high’… :)

As you continue using osm2pgsql and working with OSM data, you’ll realize that if you’re interested in generating statistics about tags or creating maps from OSM data with PostgreSQL, most of your interaction with PostgreSQL will be writing queries with SELECT statements. You’ll want to guide your learning accordingly.

Some updates

Some assorted thoughts, happenings of the past couple weeks/months:

(Originally written on April 26, 2013 – forgot why this never posted..)

…..I’m stoked to announce that I’ll be joining HOT (the Humanitarian OpenStreetMap Team) in Cap-Haïtien from May to June. I’ll be with some immensely talented and great folks. We’ll be doing great work down there and working with local Haitian mappers. …..Alas, I’ll miss FOSS4G-NA and SOTM-US, mais, c’est la vie.

I’m coming to the realization that there’s simply so much to learn, and with increasing specialization, I should focus my learning efforts more on web mapping instead of print mapping.

Note to self: I’ll be updating the site. Finally getting a domain name, and strongly considering moving the site to Jekyll.

………………

Also, looking over the popular apps on the Droid market and seeing that most of the apps on the top lists were games made me reflect: what has the Droid really enabled us to do?

What struck me is that there aren’t many “killer apps” but rather general hardware improvements, like:

- being able to check your email and communicate online 24/7, thanks to mobile data plans and the increased presence of wifi in public spaces.

- GPS

- an improved camera that allows higher resolution, image stabilization, etc.

The lack of killer apps is also an indication that software is becoming less app-based and more web-based, with software-as-a-service and OS-independent social networks like Dropbox, Twitter, and Facebook.

Now, granted, there are a few things that are close to ‘killer apps’ for me:

- finding local reviews for restaurants when you’re out of town (Yelp), and SoundHound and Shazam. Software on your phone that can identify a song from just a 10-second clip was a pipe dream only 10 years ago…

……………………

Mapping Cleveland’s Proposed Ward Boundaries of 2014

Monday March 25, 2013

Cleveland City Council President Martin Sweeney released the proposed ward boundaries for 2014. This is just one day before he presents them to be voted on in City Council.

The City of Cleveland issues this map. A JPEG. Not even georeferenced. It has no street names, and all features (including rivers and railroads) are styled the same. Nothing more.

This has sadly been characteristic of the City of Cleveland’s approach to open data, particularly spatial data…

Cleveland’s approach to open data, particularly in this instance, isn’t acceptable. Nor does it help foster a culture where civic hacking flourishes.

Great maps and other visualizations, including the great slippy map of the new districts of NYC by WNYC (led by jkeefe) that inspired me to do this, shouldn’t be exclusive to the tech cultures we usually hear about (NYC, SF, CHI, Austin, SEA, PDX, on and on) or wherever Code for America stops in for the year.

Later that afternoon, current Ward 14 Councilman Brian Cummins received PDF maps of most of the proposed wards and posted them on his blog.
A step up from what I had before. A shapefile would be too much to ask.

(At least kudos to him and my current councilman, Joe Cimperman, for having Twitter accounts and responding to their constituents on there.)

So I began by opening a blank layer in JOSM, loading in the Cleveland boundary from OpenStreetMap (fewer things to draw that way), and simply tracing out the boundaries over OpenStreetMap tiles. I was switching windows every couple of minutes: look at the JPEG boundary, draw the same lines in JOSM, repeat.

There had to be a better way; this was going to take a couple of hours (and it did).

(Side question: what do you use to draw geometries that you’ll later process in your maps/visualizations/analysis?)

Behold: the georeferencing tool in QGIS, which lets you load an image as a layer. This, I thought, would be a shortcut. I could create the polygons of the wards by tracing right over the boundaries in the image, without having to switch windows.

The biggest problem was that I didn’t know the projection of the JPEG.
Unfortunately, the tutorials I found assume that your image is already georeferenced.

To georeference in QGIS, you should know what projection your original image is in before you start. If you don’t know, you’ll have to do some guessing and trial and error. I made a few guesses with the most popular projections (EPSG:4326, 3857) and then tried several Ohio ones. An hour or 2 later, none of the projections had worked out.

So, I scrapped that idea and began to draw the ways again in JOSM.
From there, I followed my usual workflow, the one I’m most comfortable with, into Tilemill:

I used osm2pgsql to load my .osm file of boundaries (which were in the form of relations, specifically multipolygons) into a PostGIS-enabled database.
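(A sketch of that load step, using the same flags I use elsewhere on this blog; the database name here is made up:)

createdb wards
psql -d wards -c "CREATE EXTENSION postgis;"
osm2pgsql -s -d wards ~/path/to/wardboundaries.osm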

(I’ve been meaning to become more comfortable with GeoJSON, and I would have tried to save my file in JOSM as JSON, but I read there’s a bug in JOSM’s JSON export that doesn’t export relations correctly. I haven’t bothered to verify this yet.)

Next, I used Mapbox’s technique (with data from OpenStreetMap) to create a custom map that I can use as my reference base layer.

I styled my layer of proposed ward boundaries in Tilemill…

Then, in mapbox.js, I simply put the two layers together. Voila, as shown in my map at:

http://maps.jhfeichtnerfund.com/wards/index.html

Still more to do with this:

- finish up the documentation

- Tweak the colors
- Add in the census tracts, so when a user hovers over an area, they can see the population of a particular place.
- Add the existing ward boundaries (as a separate layer for the online map).
- Convert the 2014 ward boundaries (currently a .osm) to a shapefile so others can use it.
Right now, it is available as an .osm in my github repo…

Follow along in its github repo: https://github.com/skorasaurus/cleboundaries/

Life over the past 2 months

One of the big takeaways I had from attending NACIS 2012 in October was to focus my efforts on specific projects to more clearly demonstrate my mapping skills. So, I’ve taken on 2.
Since December, I’ve spent most of my time on one of them, the Cleveland Photography Society’s Scavenger Hunt Map.

I realized that to take my maps to the next step – to offer interactivity and multiple zoom levels – I needed to learn how to code JavaScript (the language that mapping libraries like Leaflet and mapbox.js are written in).

It took me longer than I had planned to learn enough to reach my objectives for the Cleveland Photography Society’s Scavenger Hunt Map – http://maps.jhfeichtnerfund.com/ –
offering interaction (users see the picture when their mouse cursor is over a marker, and a nice zooming action when someone clicks an entry in the list of markers).

That said, I spent parts of December, January, and February, crashing head first into Javascript.

Initial, very brief thoughts on JS:
That there’s no standard for writing API documentation for JavaScript blew my mind.
As someone who’s learning, this can be frustrating at times.
However, I can understand (though I don’t agree with) the developer’s perspective: code becomes obsolete relatively quickly (within years), decreasing the incentive to write good documentation. Good documentation also means more than just writing enough notes for yourself so you can understand it months later. I am guilty of this myself sometimes in my own projects.

- Best books so far to learn from: Douglas Crockford’s “JavaScript: The Good Parts” and Marijn Haverbeke’s Eloquent JavaScript – http://eloquentjavascript.net

Also, Mozilla’s JS reference: https://developer.mozilla.org/en-US/docs/JavaScript/Reference

Most other online resources have been meh.

In January, Cleveland had its first Open Geo meeting. Steve Mather and I organized it, and it was a blast. I wrote a rundown of it on my OpenStreetMap blog: http://www.openstreetmap.org/user/skorasaurus/diary/18494

In a nutshell, Cleveland was missing a place – really, a culture or ecosystem – where people could informally talk about OpenStreetMap, open-source geospatial software, and other GIS-related projects (cool maps, data visualizations, analysis, etc.) that they are working on. As the culture develops, people will form friendships, grow their networks, share tips on using software, discuss geospatial news, share pointers on code, give career advice or job leads, and maybe even start ventures or informal projects together: all things that were missing here, or didn’t exist as much as they could.

This culture isn’t going to develop overnight. But it’s starting. I’m excited to see what will come out of it.

(Note to self: do this more often; more and more things and projects that have been going on keep coming into my head.)

How I tried to overlay a mask over multiple layers except for a specific region in Tilemill (and was left with a buffalo tint) (and I liked it)

One thing about designing maps: you want to draw viewers’ attention to what matters (yes, ultimately the viewer will have their own subjective interpretation, but I digress…).
So, when designing a map to highlight the locations of 125 items that were to be photographed in the 2012 CPS Cleveland Photo Scavenger Hunt, I wanted to focus on the contest area while giving viewers (likely participants who may have forgotten the exact boundaries, or people who hadn’t done the scavenger hunt but would be at least familiar with downtown Cleveland) a little context of the surroundings.

How to accomplish this? I’m guessing it can be done in a couple of different ways: one, by fading the surrounding features (known as the buffalo tint), as shown in this demo by Bill Morris (wboykinm), who does inspiring work with Tilemill, pushes its capabilities, and blogs about it – his buffalo tint sits over a separate base map; or two, by laying a gray mask over the surrounding area outside the border, on the same map.

Either technique – decreasing the opacity of the outside area or giving it a gray mask – makes the outside a bit harder to read, drawing the viewer to the easier-to-read part and letting the non-grayed part pop out.

So, I started experimenting in my project (originally based off osm-bright, written by Mapbox) with:

map {
  background-color: #999;
}

which would provide the gray mask, and with one of the polygon-comp-op functions, which let you apply different compositing effects to features in Tilemill (using the gray mask):

#border {
  opacity: 0.5;
  polygon-comp-op: hard-light;
  line-width: 4;
  line-opacity: .65;
}

Unfortunately, all of the polygon-comp-op options that I tried, including dst-in, either had no effect or colored the inside of the border. I had only been able to lay the mask over the features inside of my polygon (labels, roads, water, land), but not outside – what I was intending to do.

So, hours later, I stepped away and thought about how else to tackle this.
I came back and noticed, silly me, that

map {
  background-color: @water;
}

was called again in base.mss to provide Lake Erie with its blue sheen. I figured that might have something to do with it, but it was a red herring. I eliminated that line, and it left me with a gray lake.

Meanwhile, I hadn’t yet figured out, or found anyone else who had done, anything like this: overlaying a mask over multiple layers except for a specific region in Tilemill.

(written on Nov. 28, 2012)
Flash forward 24 hours. This is why I love Tilemill. Yes, it’s free, open-source, and supports Linux (as well as Windows and OS X). It’s what-you-see-is-what-you-get, letting you code on one side while showing what your map looks like on the other. I’ve been a fan for a while, and 15 months in, I’m finally starting to make progress learning its intricacies and operations as its capabilities increase. It’s also getting kudos from cartographers (Dane Springmeyer, lead developer of Tilemill, had by far the most attended presentation at this year’s NACIS, the biggest annual mapping conference in North America).

I was a bit frustrated after a day of trying to understand comp-op and wondering why all of the comp-op operations I was trying either had no visible effect or achieved the opposite of what I intended, coloring inside the polygon, as shown above.

Only a few hours after I posted my question in Tilemill support, Dane explained some new tools that became available in Mapnik 2.1 (released just 2 months ago) and how to go about doing this in Tilemill.

As of now, my code ended up as:
#border {
  ::outline {
    line-color: #999;
    line-width: 4;
    line-opacity: .47;
    line-join: round;
    line-comp-op: multiply;
  }
  line-opacity: .95;
  polygon-opacity: 1;
  opacity: .83;
  image-filters: agg-stack-blur(10,10);
  comp-op: dst-atop;
}

(For future reference: in this code, as opacity gets closer to zero, you are able to see more and more of the area outside of the border.)

Resulting in:

As of now, I didn’t use the gray mask that I first intended (nor have I figured it out yet), but I’m really liking the results, and I’m encouraged by my progress: I was able to do something that I hadn’t done before.

Work on this isn’t done (I’d like to customize the colors a little more, maybe add interactivity) and you can follow the progress at its github repo and view the map.

Should I be a superuser in PostGIS? And other beginner’s questions about roles and users in PostGIS

I have a confession to make. Some reasons that I don’t post on here:
a] I don’t want to appear too ignorant to others. Typing this out makes me realize the above thought is pretty sophomoric. We all have to start somewhere when we learn things, including PostGIS, programming, cartography, and other things in the geospatial world.
b] Like others, I don’t always want to go back to my notes to write concise documentation. I just write enough to make sense to myself later on. Other tech writers do the same thing when they write documentation… leading to point c. [SIDEBAR: I keep a textfile of each problem, install issue, and question that I have, named log-nameofprogram.]
c] Most documentation for PostGIS – and I’d extend this to Linux – is written assuming the reader already knows a lot. This is a pet peeve of mine, and by not writing for beginners myself, I’m a hypocrite. Below is an example…

With that said, here is my journey along the way of learning all of this…

What’s the difference between a “superuser” in Postgres and an ordinary user? Should I use the ‘postgres’ user in my day-to-day PostGIS work? Should my created user be a superuser?

All of my mapping work is done locally on this computer. I’m the only user. Occasionally, I’ll make dumps of my DBs and then store them in other places.
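(For reference, nothing fancy; a dump is along the lines of the following, with a made-up database name:)

pg_dump nameofdb > nameofdb-backup.sql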

Until now, I had just been using postgres (the default user name after you install Postgres) as my user name. I knew that you could create a user name, but wasn’t sure whether I should in my case. What was the difference, anyway?

As a result, whenever I converted data into PostGIS databases, I would have to copy the data files (usually from OSM) into /tmp, because the postgres user couldn’t access files in my /home. I was finding this a bit inefficient, and I also couldn’t access my Python virtual environment (where imposm is installed) while logged in as postgres.

So, as I was looking to create a new user, I wondered whether or not to make it a superuser.

The PostgreSQL documentation was a bit helpful, stating:
“Only roles that have the LOGIN attribute can be used as the initial role name for a database connection. A role with the LOGIN attribute can be considered the same as a ‘database user’.” To create a role with login privilege, use either:

CREATE ROLE name LOGIN;
CREATE USER name;

What does the LOGIN privilege mean, exactly? Oh well. A couple of paragraphs down, they mention that I can issue CREATE ROLE name CREATEDB.

PostgreSQL: Up and Running (the book) explains it differently, noting that you can create users in pgAdmin or on the command line, and highly recommends creating a new user ASAP after installation, by doing:

CREATE ROLE leo LOGIN PASSWORD 'lion!king'
CREATEDB VALID UNTIL 'infinity';

It raises a good point: do I want a password with my postgres user account?
“If you wanted to create a user with super rights, meaning they can cause major destruction to your database cluster and can create what we call untrusted language functions…”
Not sure what untrusted language functions are, but I’ll trust them and create the user.

Well, logged in as postgres (you can log in as postgres in different ways, including sudo -u postgres -i) and with psql launched in the terminal, I typed in:
CREATE USER skors
(skors is my user name, also the user name for my desktop. I forgot the ; but it’s no big deal, because I was able to create the database simply by doing: createdb nameofdb)

So, I go to add PostGIS capabilities to my newly created database, as my new user, by doing:

psql -d cleve -c "CREATE EXTENSION postgis;"
ERROR: permission denied to create extension "postgis"

Argh! Why shouldn’t I be a superuser if I have to be one just to add the PostGIS extension to my own database?
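(One way out, assuming you can still act as the postgres superuser: either grant the new role superuser rights, or have postgres install the extension for you:)

sudo -u postgres psql -c "ALTER ROLE skors SUPERUSER;"
# or:
sudo -u postgres psql -d cleve -c "CREATE EXTENSION postgis;"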

Which leads me full circle back to the questions I asked in the beginning, still unanswered:

What’s the difference between a “superuser” in Postgres and an ordinary user? Should I use the ‘postgres’ user in my day-to-day PostGIS work? Should my created user be a superuser? Do I want a password with my postgres user account?

Cleaning up .GPX files for OpenStreetMap and Visualizations.

While going through the rest of my GPS traces from the HOT exploratory mission in Senegal [some write-ups are on the HOT blog], I found one GPS trace that has 2 problems I’ve occasionally experienced while collecting GPS traces for OpenStreetMap.

- The GPS signal is temporarily lost, and the distance between 2 points of your trace becomes abnormally lengthy,
cutting across roads and features.

An example: Did I time-travel across that area? Nope.

- And what I call the ‘scribble effect’ (is there another name for it?), where I accidentally forget to turn off the GPS after I arrive at my destination and end up with multiple points near each other, looking like a child scribbled in a coloring book.
An example:

If you’re planning to make any visualizations with your traces, these two symptoms can lead viewers to misinterpret where you’ve been and make your visualizations inaccurate.

The scribble effect also negatively affects OpenStreetMap: when GPS traces are uploaded in the future in the same area as your scribbled trace, other users won’t be able to discern the newer GPS trace from your scribbled one. There are also still a few places on Earth (including some regions of Senegal) that don’t have any satellite imagery that can (legally) be used for drawing ways in OSM.

So, after you complete a trace, how do you eliminate the symptoms described above?

I’ve tried a combination of different filters with gpsbabel (explained more here), but have yet to find a single command that solves both issues.

gpsbabel -t -i gpx -f gpx_louga_03062012-withmay.gpx -x track,pack,sdistance=0.6k -o gpx -F test7.gpx
- this fixed the temporary signal loss… but it didn’t fix the scribble effect.

gpsbabel -t -i gpx -f gpx_louga_03062012-withway.gpx -x discard,fixnone,fixunknown -o gpx -F test.gpx
(resulted in a blank GPX file)

gpsbabel -t -i gpx -f gpx_louga_03062012-withway.gpx -x discard,fixnone -o gpx -F test.gpx
(no effect)

gpsbabel -t -i gpx -f gpx_louga_03062012-withway.gpx -x position,distance=3m -o gpx -F test3.gpx
(this removed the long jump between the 2 segments, but didn’t solve the scribble effect)

gpsbabel -t -i gpx -f gpx_louga_03062012-withway.gpx -x discard,hdop=5,hdop -o gpx -F test6.gpx
(no effect)

gpsbabel -t -i gpx -f gpx_louga_03062012-withway.gpx -x position,distance=10m -o gpx -F test8.gpx
- removed 1,100 of the trace’s 2,400 points, but still left me with the scribble effect.

I’ll continue to explore cleaning up GPS traces with gpsbabel filters and hope to find a way; but if you know of a specific filter in gpsbabel, or any other way to do this besides manually deleting the offending points in GPSPrune, I’d appreciate it.
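(For reference, gpsbabel does let you chain multiple -x filters in one run, applied in order, so combined experiments are at least easy to script – though none of the filters above fixed the scribble effect on its own, so this is only a sketch with placeholder filenames:)

gpsbabel -t -i gpx -f input.gpx \
-x position,distance=3m \
-x track,pack,sdistance=0.6k \
-o gpx -F cleaned.gpx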

Update, Dec. 1, 2012: I also shared this in an OpenStreetMap diary entry in September. Other OSM users have experienced this problem too and haven’t found a solution beyond what I’ve described above. So I’ve found that the test3 example above, plus manually removing the scribble-effect points in GPSPrune, is the easiest way to clean up your GPS points.
