Ecosystem Dev View

  • New task added, Ecosystem Dev View. Check it out here:

    This is something I'm currently focusing on for a couple reasons:

    • It will give us much needed power in understanding what's going on in the ecosystem at both a micro and macro level, and will speed up ecosystem development/balancing considerably.
    • It creates an API for ecosystem viewing via webtools that other people can use to make interesting views of the Eco world.

    Interested if anyone has opinions on the open questions there, namely:

    • Is there a technology to easily expose APIs written in C# to a web interface like this, with documentation, JavaScript Intellisense, all those niceties?
    • Is there an easy way to expose parameters from C# to a browser interface, so that they can be changed browser-side?

    This is the first example of how we want to develop features with community support, and once we have the code shared people will be able to get a detailed view, and contribute if they like as well.

  • <blockquote>- Is there a technology to easily expose APIs written in C# to a web interface like this, with documentation, JavaScript Intellisense, all those niceties?</blockquote>

    Might be a bit overkill, but ASP.NET's Web API could work. You can self-host it, so that the whole web server is run by the application itself. I would recommend the current stable version, i.e. not the vNext one. vNext has many interesting features, but is still in a rough alpha state.

    Pros:
    • It can directly deal with POCOs (plain old CLR objects) for both input and output. It will handle serialization/deserialization of the object.
    • Lots of additional Nugets for all kind of things, including authorization and authentication.
    • Multiple formatters for data already available (JSON, XML, ...) allowing you to query the API in whatever way you feel most comfortable with.
    • It's possible to generate documentation on the fly using reflection. There's already some stuff out there, but I haven't used it yet with self host.
    • Owin (the self host part) is a pipeline, which means that giving access to it allows mods more refined request processing if required.
    • No external web server required; platform independent.
    • Quite established; there are lots of resources about it, including on communication with common frameworks (like jQuery, AngularJS, SignalR, ...).

    Cons:
    • You would need to write controllers to wire the whole thing up, i.e. there's no automatic exposure of functions or members. Not necessarily bad in my opinion, as otherwise it would be too much magic. An example controller would look like <a href="">this Gist</a>. I'm sure it doesn't make a lot of sense, but I had to come up with something :D
    • Although not required, it makes sense to use DTOs in the Web API rather than the real objects. That way, finer control over what properties can be changed in what way is possible; it also makes the Web API more stable against changes in the engine API. However, it creates another layer between the web and the core API, so that might count as "not easy".
    • Purely C#/server-side, so there's no JavaScript Intellisense possible. Although, using reflection, it might be possible to auto-generate that kind of stuff?

    Disclaimer: I haven't used Web API in a self-host environment yet, but in a quick test right now it seems to behave the same way as it would when hosted on IIS. It doesn't do a lot of magic, in the sense that you have to wire things together yourself, but I personally think that auto-binding such things usually ends badly (as in, there are errors in certain circumstances that make no sense for the end user, classes have to be written in a certain way, and so on).
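    To make the wiring concrete, here is a minimal sketch of what a self-hosted Web API setup could look like. The controller, DTO, route, and port are invented for illustration, and it assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package; this is not Eco's actual API.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Web.Http;
    using Microsoft.Owin.Hosting;
    using Owin;

    // Hypothetical DTO: the web layer sees this instead of the real engine object,
    // so the engine API can change without breaking the web API.
    public class SpeciesDto
    {
        public string Name { get; set; }
        public int Population { get; set; }
    }

    // Hypothetical controller: Web API maps GET /api/species to this by convention.
    public class SpeciesController : ApiController
    {
        public IEnumerable<SpeciesDto> Get()
        {
            // In a real server this would map engine objects to DTOs.
            return new[] { new SpeciesDto { Name = "Elk", Population = 42 } };
        }
    }

    // OWIN startup: plugs Web API into the self-hosted pipeline.
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
            app.UseWebApi(config);
        }
    }

    public static class Program
    {
        public static void Main()
        {
            // Self-host: the application itself runs the web server, no IIS needed.
            using (WebApp.Start<Startup>("http://localhost:9000"))
            {
                Console.WriteLine("Listening on http://localhost:9000/api/species");
                Console.ReadLine();
            }
        }
    }
    ```

    The response format (JSON, XML, ...) is then negotiated automatically via the Accept header, which is the "multiple formatters" point above.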

  • Would making the ecosystem completely visible from a web browser allow a player to find important resources near them, making the game too easy?

  • I think "dev view" kinda implies that it wouldn't be accessible for normal players/during normal gameplay.

  • There should also be a dev server then for developing, where you can spawn in animals and such to see how your world interacts with your mod.

  • Well, if I read it right, @JohnK talks about a bunch of development tools for exactly that. Something that can be enabled/disabled that gives you more information than the game (for debugging purposes), as well as allowing manipulation. It boils down to displaying internal data plus a bunch of dev (or "cheat" if you wish to call them that) tools.

  • I think .NET Web API is a good thing to use, and if I'm not wrong, .NET sites can also run on Linux with an add-on to Apache or whatever other web server you use. It's also easier for most people with more advanced coding skills, since .NET can be used with several languages, so people aren't locked into one particular language. And Visual Studio Community Edition is free to use. There is so much power behind .NET, and such powerful tools like Visual Studio, that I think exposing server views via Web API would be a very good solution. Web API also lets you use JavaScript (not that nasty Java), so you can build a full website with the Web API behind it, giving people access to implement stuff on their own websites. The possible things you can do are almost endless. I'm myself working with Web API at the moment to create a nice tool to control my gaming servers. That's my point of view =) Though I'm also known to be almost a Microsoft fanboy, so checking out other stuff could be good too =P

  • I'm also for using Web API. I'm writing a Minecraft control panel with a service that polls the web API and updates information accordingly. The website also communicates with the service(s) by way of the web API to perform various actions. This allows the service to run anywhere, so long as it can reach the web API for its jobs.

  • Thanks all, and great summary RepeatPan, that's really awesome. Going to look into those techs you mentioned a bit more. And thanks for the sample code! Really want to get the source opened up so you guys/girls can dive right in.

  • @JohnK when you go open source, will it be to the public or just the $175 Kickstarter backers?

  • What kind of support for WebAPI is there on other platforms? (Mac/Linux) - Is there mono support?

  • Not entirely sure, but from what I can gather, WebAPI is supported by Mono to run it on Linux. It might need some extra steps to get it playing nice though; I guess only time will tell once it can be tested.

  • @iramos15 it's limited to just the $175 backers.

  • I've tried to write a simple project and run it under Mono on Windows, but the result is really ugly:

    • It seems to eat the "HTT" at the very beginning of the response, so none of the HTTP headers are parsed. I have no idea why it does that.
    • It doesn't seem to work with Microsoft.Owin.StaticFiles. I get a <code>System.IOException: Invalid parameter</code> in <code>System.IO.FileStream.ReadData (System.Runtime.InteropServices.SafeHandle safeHandle, System.Byte[] buf, Int32 offset, Int32 count) [0x00000]</code>.
    • Requests to the Web API part (api/test) take at least 15 seconds for each request. Again, I have no idea why. With .NET, even when debugging in Visual Studio, this takes at most 200ms the first time, 10ms every time after.

    I've uploaded <a href="">all binaries to my Dropbox</a>. It creates a self-hosted server listening on http://localhost:4444. You can access the API using http://localhost:4444/api/test. /api/test/add/something will add "something" to the list; /api/test/remove/something will remove "something" from the list. There are two static files, /Index.html and /LoremIpsum.txt, that should be displayed if requested. 500 and 404 error handlers should be implemented as well.

    If anyone who has experience with Mono could take a look at it, that would be great. I really would hate to drop Web API just because Mono is stuck in the last decade again.
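    For readers who don't want to grab the binaries, here is a rough sketch of what such a test project could look like. This is a reconstruction, not the actual uploaded code; the wwwroot folder name and the Microsoft.AspNet.WebApi.OwinSelfHost and Microsoft.Owin.StaticFiles packages are assumptions.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Http;
    using Microsoft.Owin.FileSystems;
    using Microsoft.Owin.StaticFiles;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();
            config.MapHttpAttributeRoutes(); // picks up the [Route] attributes below
            app.UseWebApi(config);

            // Serve /Index.html, /LoremIpsum.txt etc. from a local folder.
            app.UseStaticFiles(new StaticFileOptions
            {
                FileSystem = new PhysicalFileSystem("wwwroot")
            });
        }
    }

    [RoutePrefix("api/test")]
    public class TestController : ApiController
    {
        private static readonly List<string> Items = new List<string>();
        private static readonly object Gate = new object();

        [HttpGet, Route("")]
        public IEnumerable<string> GetAll()
        {
            lock (Gate) return Items.ToArray();
        }

        [HttpGet, Route("add/{value}")]
        public IEnumerable<string> Add(string value)
        {
            lock (Gate) { Items.Add(value); return Items.ToArray(); }
        }

        [HttpGet, Route("remove/{value}")]
        public IEnumerable<string> Remove(string value)
        {
            lock (Gate) { Items.Remove(value); return Items.ToArray(); }
        }
    }
    ```

    Web API is registered before the static file middleware so that /api requests never fall through to the file system.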

  • I have played around a little bit more, especially with different technologies. Here's my (likely final) report for this week:

    • <b>Mono's behaviour remains undecipherable.</b> I could prove that the pipeline (i.e. all the code I am somehow responsible for) is exited immediately. It seems that, for some reason, Mono struggles with sending the data to the client: removing "HTT" and sometimes (not always, but more often than not) taking unreasonably long.
    • Choosing the right technology will likely end in a battle between Mono and whatever is comfortable. Because Mono, sadly, is still stuck around .NET 3.5, most innovations of the past five or so years are only partially usable, or not usable at all.

    Let's assume now that we don't care for Mono right now, or are going to fix these issues later on, or switch to DNX Core anyway. The technologies I would recommend are the following:

    • <b>OWIN/Katana</b> for the whole server stack. It allows plugging custom middleware into the pipeline to deal with all sorts of things. SignalR and Web API are simply plugged into this as middleware, but many more components are available as NuGets or can easily be written by hand. These include authentication and authorization (e.g. giving only certain people access to certain APIs/pages), CORS (required to have APIs available on other domains) or custom HTTP handlers.
    • <b>Web API 2</b> for "normal" API requests. As mentioned above, Web API 2 is easy to add to a project, offers functionality that is covering most people's needs and can be easily expanded if necessary.
    • <b><a href="">SignalR</a></b> for more sophisticated APIs. Put bluntly, SignalR offers a bidirectional RPC between server and client using various technologies (WebSocket, Server Sent Events, Long Polling) so that it should work with most browsers. Especially for transferring lots of data, or having extended server/client communication, I would recommend it. In this case, transferring all the map data to the client as well as stats data could be done over SignalR. Note that this makes Web API somewhat obsolete: while Web API might be nice for programs (3rd party applications), SignalR is more tightly coupled to the whole system. In the end, I think offering both will be required (SignalR for "internal" graphs/stats/data that changes a lot/randomly or is just plain huge; Web API for data that does not change a lot or is not relevant to be kept live).
    • <b>D3</b> for the whole graphical department. I've looked into both PaperJS and D3, and I think D3 is more what we're looking for. PaperJS seems to be more for manipulation of canvas/SVG, of which it offers a more fine-grained definition. D3, on the other hand, seems to be specialised in data representation - basically what we mostly need. I've played around with it and got some pretty stable results - it's pretty easy to create graphs, diagrams, charts, whatever one's heart desires. For starters, it seems like a better solution than PaperJS, but I could imagine that both are used later on (PaperJS for purely graphical stuff, D3 for everything related to data).
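    A minimal sketch of how the OWIN + SignalR combination above could be wired together, assuming the Microsoft.AspNet.SignalR NuGet package; the hub, its methods, and the stats payload are all invented for illustration:

    ```csharp
    using Microsoft.AspNet.SignalR;
    using Owin;

    // Hypothetical hub: clients call Subscribe, the server pushes stats back.
    public class StatsHub : Hub
    {
        public void Subscribe(string region)
        {
            // Group clients by the region they are watching.
            Groups.Add(Context.ConnectionId, region);
        }
    }

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR(); // exposes the hubs under /signalr
        }
    }

    // Called from the simulation loop whenever a population changes.
    public static class StatsBroadcaster
    {
        public static void PushPopulation(string region, string species, int count)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<StatsHub>();
            // "populationChanged" is a client-side callback, invoked via dynamic dispatch.
            hub.Clients.Group(region).populationChanged(species, count);
        }
    }
    ```

    The server pushes to clients without polling, which is what makes this a better fit than Web API for data that changes a lot.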

    Not mentioned yet, kinda optional but might make sense:

    • <b><a href="">TypeScript</a></b> instead of JavaScript. TypeScript is basically a type-safe ECMAScript that can be compiled to various versions of JS. It's tightly integrated into Visual Studio 2015 (with some support in 2013 too) and offers many advantages over normal JS, mainly its type safety and clear OOP approach. VS offers really good Intellisense support for TS, something it sometimes fails to do for JS. Workflow-wise, it's semi-easy to integrate into console projects (it needs an additional build step to compile the files) and very easy to integrate into vNext web projects (complete tool chains already exist for auto-compilation upon changes, even during runtime). For development purposes, I suppose it could be possible to just wrap the server into such a project. The web part can then simply be included in the Eco server's standalone (self-host) HTTP server. TypeScript can be mixed with normal JS; it's also possible to create definitions (think of C headers) for JavaScript files in order to use them "safely" with TS.
    • <b><a href="">AngularJS</a></b> (or similar; I could offer experience with AngularJS). A MVC framework in JavaScript/TypeScript that offers features like single page websites (the client loads only the template HTML pages (views), all other data is fetched through AJAX/SignalR) with various features such as routing, data binding, and more.

    The most common approach nowadays when using such technologies would probably be to use TypeScript for everything browser-scripting related, D3 for charts and anything related to displaying data, AngularJS for management of the views/controllers as well as client endpoint, SignalR/WebAPI for the server endpoint and OWIN to host the whole thing on the server inside an existing application.

    Maybe there are things that need to be changed depending on how Eco is built, but that can only be said once we get source access ;). For now, I'm working on a few test projects that use some of the technologies I haven't used much yet (such as SignalR and D3), to get a bit more comfortable with them. So far, I really like what I see though. I'll keep you up to date.

  • <blockquote> TypeScript instead of JavaScript.</blockquote>
    @RepeatPan TypeScript is interesting, but I'm always wary about using those types of languages that get "compiled" into JavaScript (CoffeeScript, TypeScript, Dart). There are plenty of people out there who understand JavaScript well enough that this sort of thing isn't necessary. TypeScript would also introduce more complexity into the build process. The nature of Angular 1.x doesn't really require it, as everything can be broken down into separate "modules", thus adding proper namespacing.

  • Has anybody used both Ember and Angular? Not having used Angular, I kind of prefer the way Ember works.

  • @PatchworkKnight I'm fairly experienced with Angular. It's magic. I haven't used Ember though.

  • I've got some experience with Angular, probably a lot more after the next sprint at work.

    SchlongFry: As already discussed in #general, TypeScript is different: it's a superset of JavaScript (i.e. any valid JavaScript is valid TypeScript). It doesn't try to be an edgy new language, but rather extends JavaScript with some direly needed things (type safety, classes, interfaces, a few more bits). The additional build step is literally just a post-build step, if that - VS has the ability to compile .ts files every time you save them. Beyond that, a project for VS2015 vNext stuff has all the goodies that modern JS applications need (support for Bower, npm and Grunt/Gulp). I'm fairly certain that the workflow could be very much streamlined.

    In the end, this will improve Intellisense by a lot (which was one of the "requirements" from John), improve safety (because the compiler will already be able to catch some errors) and, I firmly believe, readability: because TypeScript is "closer" to C# than normal JavaScript is, it could be easier to learn than normal JS. At the very least, the concepts shouldn't strike anyone familiar with C# as extremely revolutionary (and vice versa).
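    To illustrate the point, a tiny example of what the annotations buy you; all the names here are made up:

    ```typescript
    // TypeScript adds optional static types on top of normal JavaScript;
    // the annotations exist only at compile time and are erased in the emitted JS.
    interface SpeciesStat {
        name: string;
        population: number;
    }

    function totalPopulation(stats: SpeciesStat[]): number {
        return stats.reduce((acc, s) => acc + s.population, 0);
    }

    const stats: SpeciesStat[] = [
        { name: "Elk", population: 42 },
        { name: "Wolf", population: 7 },
    ];

    console.log(totalPopulation(stats)); // 49

    // Both of these are caught by the compiler instead of failing at runtime:
    // totalPopulation("oops");        // error: string is not SpeciesStat[]
    // stats[0].population = "many";   // error: string is not number
    ```

    Since any valid JavaScript is valid TypeScript, you could start from the existing JS and add annotations incrementally.
    
    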

  • Well @RepeatPan, from what you say I would say TypeScript is the way to go =P
