While it’s easy to type emoji characters on a smartphone or tablet, it’s not as straightforward in most chat clients on a PC. When I’ve wanted to insert an emoji into a chat message from my PC, I’ve previously resorted to workarounds like googling the name of the emoji, finding a site with information about it, copying it to my clipboard from there, and pasting it into the chat. I decided to take the past couple of evenings and build a better solution.

Introducing ClipEmoji.com, a site that allows you to copy an emoji to the clipboard with a single mouse click! ClipEmoji.com also supports search-as-you-type, so you can quickly get a visual list of, for example, all emoji that contain “smile” as part of their name.

The emoji are actual Unicode characters, not images, and will therefore vary in appearance depending on the platform on which they are viewed. The emoji names and keywords are from the Full Emoji Data chart at unicode.org and are used with permission.

Transforming the unicode.org data for use on ClipEmoji.com, which I did with a one-off C# program that parsed the Full Emoji Data HTML and constructed the ClipEmoji.com HTML, was educational. I learned that many emoji “characters” are actually composed of combinations of multiple raw characters; so, for example, when grabbing an emoji character from a string, assuming that the emoji character has a string length of 1 is a mistake! (There’s a small sketch of that pitfall below.)

Feel free to bookmark and share ClipEmoji.com if you find it useful, and to let me know if you have any questions or suggestions! 😁
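To illustrate that “string length of 1” pitfall, here’s a small C# sketch. This is my own example, not ClipEmoji.com’s actual code:

using System;
using System.Globalization;

class EmojiLength
{
    static void Main()
    {
        // U+1F601 is a single emoji, but it sits outside the Basic Multilingual Plane,
        // so a .NET string stores it as a surrogate pair: two UTF-16 code units.
        string grin = "😁";
        Console.WriteLine(grin.Length);                                 // 2, not 1
        Console.WriteLine(char.ConvertToUtf32(grin, 0).ToString("X"));  // 1F601

        // StringInfo walks "text elements" rather than raw chars, which keeps
        // a surrogate pair together as a single unit when scanning a string.
        TextElementEnumerator enumerator = StringInfo.GetTextElementEnumerator("smile: 😁");
        while (enumerator.MoveNext())
        {
            Console.WriteLine(enumerator.GetTextElement());
        }
    }
}

(Some emoji, like flags and family emoji, are built from several code points joined together, so they can run even longer than two code units.)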
Apple announced last week that the new iPhone 7 won’t have a “standard” 3.5mm stereo headphone jack; it’ll only have a Lightning port. To offset this, the iPhone 7 ships with a dongle that allows 3.5mm headphones to be plugged into the Lightning port, as well as with a pair of earbuds that have a Lightning connector instead of a 3.5mm plug.

Although I’ve only ever owned Apple iOS smart devices up to this point – and I’ve even developed and published my own successful iOS-exclusive RPG game on the App Store – the lack of a standard 3.5mm jack on the iPhone 7 is unfortunately a deal-breaker for me personally. Here’s why some of the arguments I’ve heard for why the iPhone 7 having no standard audio jack is no big deal don’t resonate with me.

Apple is giving you a dongle! Just use that!

I have several different pairs of earbuds that I use with my phone, so just keeping the dongle permanently attached to one pair isn't an option. I could buy a dongle for every pair of earbuds I own, but in addition to being somewhat expensive and annoying, it would be a hassle to remove (and keep track of) the dongles when I want to use the earbuds with a non-iPhone device.

Keeping a dongle always attached to my phone isn't going to happen either. I don't want it always dangling, and when I remove it to charge the phone, it might get lost.

Finally, I’ve already done the thing where I need to use a dongle to plug headphones into a device’s power port: with the Game Boy Advance SP, back in 2003. I found the experience pretty annoying back then.

Apple is giving you Lightning-connector earbuds! Just use those!

I often plug my earbuds into devices besides my phone, so replacing all of my earbuds with Lightning-connector versions isn't an option. Here’s a partial list:
- My PC
- My iPod Nano (which I still use for listening to live sports broadcasts on local FM radio)
- My Nintendo DS
- The built-in jack in airplane seats (for watching live sports or recent movies)
- The built-in jack in the treadmill-with-TV at my gym
Further, I’m not interested in owning two distinct sets of earbuds: one set that works with 3.5mm connections, and one set that works with the iPhone. (I own multiple pairs of fairly inexpensive earbuds, because over time I’ve learned that I tend to misplace them frequently.) I’d much rather be in a position where any pair of earbuds I own will work with any audio jack that I own, or might encounter.

The iPhone works with Bluetooth earbuds! Wireless is better anyway!

I don't like wireless / Bluetooth earphones much, as I have found that for me, the hassle of needing to keep them charged outweighs the benefit of not having a wire. I’ve also found that having to manually re-pair them often (when I use the same set of earphones with multiple devices) can be annoying.

But the 3.5mm standard is antiquated! Stop living in the past!

Sure, it absolutely makes sense to replace old technologies when superior replacements become available. In this case, though, I’d argue that for most practical purposes, Lightning (for audio connections) is an inferior alternative to 3.5mm audio. Lightning offers no discernible improvement in sound quality, and it forces the use of workarounds like the dongle, in contrast to the “it just works” of the 3.5mm stereo standard.

So, I will not be buying an iPhone 7. My tentative plan is to replace my current iPhone 5S with an iPhone 6S – which works with all standard earbuds, no dongle needed! – at some point in the next year or two. Hopefully that’ll last me until at least 2019, at which point I’ll decide what to do next – including maybe making the painful jump away from my accumulated iOS software library to Android, Windows Phone, or whatever other future alternative might be available.
I recently did some troubleshooting for, and managed to successfully fix, an issue where HTTPS connections to a specific remote server were failing. The client computers affected by the issue were a pair of servers, running Windows Server 2012 R2 and Windows Server 2008 R2, respectively. For the purposes of this post, I’ll use https://tls.example.com as the URL of the remote server.

The Problem

Symptom 1: In a C# program, an attempt to establish an HTTPS (SSL / TLS) connection to https://tls.example.com failed. Error message: “The request was aborted: Could not create SSL/TLS secure channel.” (A minimal stand-in for the failing call appears after the symptom list below.)
- The program worked fine when making connections to all other HTTPS URLs that we had tried.
- The exact same C# program worked fine when I ran it from my local workstation as the client PC (connecting to the same https://tls.example.com remote server).
Symptom 2: In Internet Explorer 11, attempting to connect to https://tls.example.com failed. Error message: “Turn on TLS 1.0, TLS 1.1, and TLS 1.2 in Advanced settings and try connecting to again. If this error persists, contact your site administrator.” - However, connecting to https://tls.example.com using the Chrome browser from that same client PC worked fine.
- Connecting to https://tls.example.com from my local workstation using Internet Explorer 11 also worked fine.
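For context, the failing call in the C# program was nothing exotic. A minimal stand-in (not the actual program; tls.example.com is the placeholder hostname from above) looks roughly like this:

using System;
using System.Net;

class TlsConnectionTest
{
    static void Main()
    {
        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://tls.example.com/");
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Connected: HTTP " + (int)response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // On the affected servers, this printed:
            // "The request was aborted: Could not create SSL/TLS secure channel."
            Console.WriteLine(ex.Message);
        }
    }
}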
The Solution

Note: This solution will only help if the remote server is configured with an SSL key that has an ECDSA (not RSA) signature, but all of the cipher_suites that the client PC is configured to support are RSA (not ECDSA).

Note 2: If you’re reading this post after August 2016, check and make sure the new cipher_suites value that you add is one that’s still cryptographically valid. These things tend to change over time!

Note 3: Don’t use Registry Editor (as suggested here) unless you know what you’re doing. It can permanently damage your PC.

In my case, the problem was caused by there being no match between the set of cipher_suites supported by the client and the set of values that the server was able to accept. Specifically, in my case, the server had an SSL key signed with ECDSA (not RSA), but my problematic client PCs were configured to use only RSA (not ECDSA) cipher_suites. This caused SSL handshaking to fail after the initial “Client Hello” step. I was able to fix this by adding an ECDSA value to my client PC’s set of cipher_suites.

On the client PC (a small verification sketch follows these steps):
- Open the Registry Editor.
- Navigate to HKLM\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002
- Edit the existing comma-separated value, and add a new value to the end that’s supported by the client OS, is cryptographically secure, and works with a key with an ECDSA signature. The value I used: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256
- Reboot.
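As a sanity check before and after the edit, you can also dump the policy cipher suite list from code. This is a hedged sketch: on the machines I looked at, the comma-separated list lived in a registry value named Functions, so treat that value name as an assumption to verify on your own system.

using System;
using Microsoft.Win32;

class CipherSuiteList
{
    static void Main()
    {
        // Policy-configured cipher suite order (the key edited in the steps above).
        // The value name "Functions" is an assumption; verify it in Registry Editor.
        const string keyPath =
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002";

        string suites = Registry.GetValue(keyPath, "Functions", null) as string;

        if (suites == null)
        {
            Console.WriteLine("No policy cipher suite list is set; OS defaults are in effect.");
            return;
        }

        foreach (string suite in suites.Split(','))
        {
            Console.WriteLine(suite);
        }
    }
}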
Investigation Details

The remainder of this post details the investigation that led me to the above solution.

SSL / TLS protocol mismatch?

I’ve run into SSL handshaking problems before that were caused by a protocol mismatch; for example, the client specified that it would only connect using SSL 3.0 or TLS 1.0, but the server would only accept TLS 1.2. However, that did not seem to be the cause of the issue here (despite the Internet Explorer error message):
- In my C# program, I was specifying that the client accept any of TLS 1.2 | TLS 1.1 | TLS 1.0. (A short sketch of that setting appears after this list.)
- In Internet Explorer’s Advanced Options dialog, I confirmed that the checkboxes for TLS 1.2, TLS 1.1, and TLS 1.0 were all already checked (again, despite the error message).
- In Firefox, by clicking on the green lock icon in the address bar after successfully connecting to the remote website, I confirmed that the connection was secured using TLS 1.2.
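For what it’s worth, here’s roughly how that protocol selection is expressed in a .NET Framework client. A hedged sketch, not the exact line from my program:

using System.Net;

static class TlsSetup
{
    public static void AllowTls10Through12()
    {
        // Allow the client side of outgoing HTTPS connections in this process
        // to negotiate TLS 1.0, 1.1, or 1.2.
        ServicePointManager.SecurityProtocol =
            SecurityProtocolType.Tls12 |
            SecurityProtocolType.Tls11 |
            SecurityProtocolType.Tls;
    }
}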
As far as I could tell, both the client and server should have been agreeing on the use of TLS 1.2. Thus, this was probably not a protocol mismatch issue.

SSL certificate trust chain issue?

When I asked myself the question “So what’s different between my local PC (where things work fine) and my server PCs (not working)?”, the first answer I came up with was: maybe the installed trusted SSL root certificates? However, that theory turned out to be a dead end in this case. I used the “Manage server certificates” / “certlm” tool to look at the installed certificates on my PCs at Certificates > Trusted Root Certification Authorities, and although there were some differences between the root certs on my local Windows 10 PC and the root certs installed on the Windows Server 2012 R2 PC, that didn’t turn out to be the cause of the problem.

Additional symptom: System event log error

My first clue to the actual problem was a Windows System event log error that I noticed would be logged whenever I reproduced the HTTPS connection failure in Internet Explorer or my custom C# program: “A fatal alert was received from the remote endpoint. The TLS protocol defined fatal alert code is 40.” A helpful MSDN blog post defined that error code of 40 as “handshake_failure”.

Network traffic sniffing using Microsoft Message Analyzer

As suggested by another very helpful Microsoft blog post, I installed Microsoft Message Analyzer. (It turns out that I needed to install the 64-bit version of Analyzer to match my OS, even though as far as I know, browsers typically run as 32-bit processes.) Using Message Analyzer turned out to be easy. I just did the following:
- In Analyzer, hit the “New Session” button;
- Selected “Local Network Interfaces”;
- Hit Start;
- Switched windows to my C# program, and reproduced the issue;
- Switched back to Analyzer, and hit the Stop button.
I filtered out all irrelevant events captured while my session was running by applying this filter:

(*Source == "www.example.com" or *Destination == "www.example.com") and *Summary contains "Handshake"

(Where both instances of “www.example.com” were replaced with the actual host to which I was connecting.)

On my local PC, where the HTTPS connection was working, the Message Analyzer results included a “Handshake: [Client Hello]” message originating from my local PC, followed by a “Handshake: [Server Hello]” originating from the server. However, on the Windows Server 2012 R2 machine, where the connection was failing, I could see that the “Handshake: [Client Hello]” from the local machine was followed by an “Alert” reply from the server! Doing a right-click | Show Details on the Alert reply, I could see that it contained a body message of “Level 2, Description 40”. This reply must have been what the System event log was picking up to generate the message that I’d noticed earlier.

Comparing the successful and unsuccessful Client Hello messages

At this point, I’d narrowed down the difference between the succeeding and failing environments to the differing server replies to the initial “Client Hello” step of the SSL handshake. Still in Message Analyzer, I did another Show Details to compare the contents of the “Client Hello” on my Windows 10 PC (working) and my Windows Server 2012 R2 machine (not working). The significant difference turned out to be the cipher_suites parameter in the body of each PC’s “Client Hello” message.

As I learned, the cipher_suites parameter contains the list of encryption settings that the PC sending the message is able to handle. The idea is that the server picks the one from that list that it prefers, sends a “Server Hello” reply that includes the selected cipher suite, and the two sides use that to communicate securely. It turns out that while my Windows 10 PC (working) was sending a selection of 33 cipher_suites values that it was able to support, the Server 2012 R2 PC (not working) was sending only 11 cipher_suites values!

Each cipher_suites value, while it appears in the raw message body as an integer, “translates” to a descriptive string value like TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384. (Message Analyzer helpfully performs this translation when displaying the values in the cipher_suites list under the “body” value, as is mostly visible in the screenshot above.) The Microsoft article Cipher Suites in TLS/SSL provides a very helpful picture of what the parts of those cipher_suites values mean, which I’ll borrow and display here:

Taking a closer look, the 33 cipher_suites values from the Client Hello message of the Windows 10 PC (working) contained a mix of RSA, DHE, and ECDSA as the Signature value. The 11 values from the Server 2012 R2 PC (not working) all had RSA as the Signature value!

A Certificate Signing Algorithm Mismatch?

Discovering that the not-working Server 2012 R2 PC was effectively saying that it would only support RSA as the cert signing method immediately suggested a new likely theory: if the server cert was signed with something other than RSA, the SSL handshaking would fail. Sure enough, drilling further down into the cert details in Firefox showed that the cert was signed not with RSA, but with ECDSA:

In essence, the failing SSL handshaking conversation was going like this:
- Client [Client Hello]: Hey, let’s talk securely, using any of these methods (…), as long as you’ve got an RSA-signed cert!
- Server [Alert]: Sorry, nope, I can’t do business along those parameters. Bye!
Getting the Server 2012 R2 PC to accept an ECDSA certificate

A great blog post by Nartac Software on how their IIS Crypto tool works pointed me to the solution. A Windows registry key mentioned in that article contained the same set of cipher_suites values that I was seeing in the problem PC’s Client Hello SSL handshake message:

HKLM\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002
In the Server Hello SSL handshake message on my working Windows 10 PC, I could see that the cipher_suites value the server had selected for the successful connection was:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
From that same article, another registry location has the list of supported cipher suites on the server:
HKLM\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002
Looking in that registry location on the Server 2012 R2 PC, I saw that one of the supported values was
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256
With the cipher suite portion of that value matching the suite that the server had accepted in the SSL handshake from my Windows 10 PC, I edited the comma-separated list of cipher suite values in the first 00010002 registry key above to include this additional value. Finally, I rebooted the Server 2012 R2 PC (since a reboot is required to make the change take effect).
After the reboot, the problems were solved! Internet Explorer was successfully able to connect to the target website, and my C# app was also able to successfully establish an HTTPS connection.
So how had this happened?
I posed the question to the failing client PCs’ hosting provider: Are Windows Server 2008 R2 and Windows Server 2012 R2 machines configured by default to only accept RSA SSL certs, or is this something that the hosting provider configures in their “default” images?
The answer, it turned out, was neither of the above. Instead, the missing non-RSA cipher suite values had been intentionally removed in a “server hardening” task performed some time in the past. This probably made sense originally, under the assumption that these servers would never themselves be acting as the client side of an HTTPS connection, and therefore, for the sake of reducing attack surface, cipher suites with signature types other than the servers’ own cert signatures could be disabled.
I recently worked through a situation where a bulk upsert (update if a matching record already exists; otherwise insert a new record) to SQL Server, implemented using Entity Framework 6, was producing the correct results, but running very slowly. I was able to improve the performance of the bulk upsert by about 600x (!!!) by replacing the Entity Framework implementation with a new approach based on a stored procedure taking a table-valued parameter.

Entity Framework implementation – very slow

The original code, using Entity Framework 6 (altered to use a table / object type of “Employee” instead of the actual type I was working with):

DateTime now = DateTime.Now;
using (MyCustomContext context = new MyCustomContext())
{
    foreach (Employee employee in employeeData)
    {
        // Get the matching Employee record from the database, if there is one.
        Employee employeeRecord = context.Employees
            .Where(e => e.EmployeeID == employee.EmployeeID)
            .SingleOrDefault();

        bool isNewRecord = false;
        if (employeeRecord == null)
        {
            // We don't have a record in the database for this employee yet,
            // so we'll add a new one.
            isNewRecord = true;
            employeeRecord = new Employee()
            {
                EmployeeID = employee.EmployeeID,
                CreationDate = now
            };
        }

        employeeRecord.ModificationDate = now;
        employeeRecord.FirstName = employee.FirstName;
        // (Set the remaining attributes...)

        if (isNewRecord)
        {
            context.Employees.Add(employeeRecord);
        }

        context.SaveChanges();
    }
}
That approach was taking around 300 seconds per 1000 records inserted. It was bogging down my application in situations where I needed to insert/update tens of thousands of records at once.
Stored Procedure + Table-Valued Parameter implementation – fast!
In my testing, the revised approach below, replacing the Entity Framework implementation with a stored procedure taking a table-valued parameter, was able to upsert the same 1000 records in 0.5 second – a huge improvement!
I needed to add a custom table type with columns matching the Employees table:
CREATE TYPE EmployeesTableType AS TABLE
(
    [EmployeeID] [int] NOT NULL,
    [CreationDate] [datetime] NOT NULL,
    [FirstName] [nvarchar](100) NOT NULL,
    /* More fields here... */
);
Next, the stored procedure to perform the upsert, taking input of an instance of the custom EmployeesTableType, and using the SQL Server MERGE command:
CREATE PROCEDURE dbo.UpsertEmployees
    @UpdateRecords dbo.EmployeesTableType READONLY
AS
BEGIN
    MERGE INTO Employees AS Target
    USING @UpdateRecords AS Source
        ON Target.EmployeeID = Source.EmployeeID
    WHEN MATCHED THEN
        UPDATE SET Target.EmployeeID = Source.EmployeeID,
                   Target.FirstName = Source.FirstName,
                   /* More fields here... */
    WHEN NOT MATCHED THEN
        INSERT (EmployeeID,
                FirstName,
                /* More field names here... */
               )
        VALUES (Source.EmployeeID,
                Source.FirstName,
                /* More field values here... */
               );
END
Finally, here’s the C# code I used to execute the stored procedure:
// (These methods assume using directives for System.Data, System.Data.SqlClient,
// and System.Collections.Generic.)
public static void UpsertEmployeeData(List<Employee> employees)
{
    string connectionString = MyCustomMethodToGetConnectionString();

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();

        using (SqlCommand command = connection.CreateCommand())
        {
            command.CommandText = "dbo.UpsertEmployees";
            command.CommandType = CommandType.StoredProcedure;

            SqlParameter parameter = command.Parameters.AddWithValue("@UpdateRecords",
                CreateUpdateRecordsSqlParam(employees)); // See implementation below
            parameter.SqlDbType = SqlDbType.Structured;
            parameter.TypeName = "dbo.EmployeesTableType";

            command.ExecuteNonQuery();
        }
    }
}

private static DataTable CreateUpdateRecordsSqlParam(IEnumerable<Employee> employees)
{
    // Build a DataTable whose columns match dbo.EmployeesTableType.
    DataTable table = new DataTable();
    table.Columns.Add("EmployeeID", typeof(Int32));
    table.Columns.Add("FirstName", typeof(string));
    // More columns here...

    foreach (Employee employee in employees)
    {
        table.Rows.Add(employee.EmployeeID,
                       employee.FirstName
                       /* More field values here... */);
    }

    return table;
}
Ideally, this post will serve as a useful reference the next time I, or you, need to code up a bulk upsert into SQL Server!
Credit to this StackOverflow answer by Ryan Prechel which was the primary inspiration for this approach.
Today, while creating a C# unit test, I had a situation where I needed to set the value of a private variable on the class under test. The variable was an enum-type variable, where the enum type itself was a private inner type defined in the class under test. (I won’t get into, in this post, why I ended up landing on doing this instead of some other solution, such as refactoring the actual class under test to avoid the need to do so; that would be a separate discussion.) This post describes the technique using reflection that I ended up using to accomplish this.

For example, given a class defined like:

class Card
{
    private Suit _suit;

    private enum Suit
    {
        Hearts,
        Diamonds,
        Clubs,
        Spades
    }
}
Given an instance of that class named card, I was able to set the value of that object's private _suit variable by doing:
using System.Reflection;
...

// Get a reference to the _suit member variable.
FieldInfo suitField = card.GetType().GetField("_suit", BindingFlags.NonPublic | BindingFlags.Instance);

// Get a reference to the Spades member of the private nested Suit enum.
// (For a static enum field, GetValue ignores the instance argument, so null is passed.)
object spades = card.GetType().GetNestedType("Suit", BindingFlags.NonPublic).GetField("Spades").GetValue(null);

// Set the value of _suit on our card object to Spades.
suitField.SetValue(card, spades);
This kind of reflection hackery probably isn’t a best practice in most situations! Still, I thought it might be helpful to record here as an aid for those rare situations where it does make sense to manipulate this kind of enum-type private variable for unit testing purposes.
Problem

When using the curl command-line utility to manually send an HTTP POST to a server, the “data” value specified in the message is unexpectedly truncated when the server receives it. For example, given this command line:

curl --request POST "https://www.myserver.example.com/api/submit" --header "Content-Length:115" --header "Accept-Language:en-us" --header "Host:www.myserver.example.com" --header "Accept:image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/x-shockwave-flash, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*" --header "User-Agent:Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1)" --header "Connection:Keep-Alive" --header "Cookie:ASP.NET_SessionId=some_session_token_here;" --header "Referer:https://www.myserver.example.com/" --header "Content-Type:application/x-www-form-urlencoded; Charset=UTF-8" --data "PrimaryID=719&SecondaryID=27483&email1=someone@example.com&email2=someone2@example.com&message=Visit+my+site+at+http://mysite.example.com&TertiaryID=1738242&subject=Subject+text+goes+here"

The server (an ASP.NET MVC application) received the message, but the “message” parameter was assigned a value of “Visit my site at http” – the “://mysite.example.com” portion of the value was missing. The subsequent parameters in the “--data” value, such as TertiaryID, were also completely missing their values, according to the server.

Solution

The problem is the Content-Length header value (from earlier in the command line). As written, it has a value of 115, so the server truncates the data value after 115 characters (which happened to fall just after the “http” in the “message” parameter in this example). The solution is to either set the Content-Length value to the actual length of the data value, or to just omit the Content-Length header entirely. Thanks to my colleague Kevin for pointing that out and saving my sanity!

I had originally approached this problem assuming that the “://” was the problem – that curl wasn’t sending it correctly, and/or that the server was refusing it (possibly for security reasons) – but that turned out to be just a red herring, based on the fact that the incorrect Content-Length value just happened to fall near that substring’s position.
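If you do want to keep an explicit Content-Length header, it’s safer to compute it from the body than to type it by hand. Here’s a minimal C# sketch of my own (not part of the original command) that prints the byte length of the example body above:

using System;
using System.Text;

class ContentLengthCheck
{
    static void Main()
    {
        // The same form-encoded body that was passed to curl via --data.
        string body =
            "PrimaryID=719&SecondaryID=27483&email1=someone@example.com" +
            "&email2=someone2@example.com" +
            "&message=Visit+my+site+at+http://mysite.example.com" +
            "&TertiaryID=1738242&subject=Subject+text+goes+here";

        // For a form-urlencoded body, Content-Length is the byte count of the body,
        // which for this ASCII-only string is noticeably more than the 115 declared above.
        Console.WriteLine(Encoding.UTF8.GetByteCount(body));
    }
}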
Sometime in mid-2013, I had a hankering to play a particular kind of RPG on my iPhone. I wanted a game with these features: - Turn-based combat.
- Portrait orientation, and thus playable with one hand. (e.g. while eating with the other hand.)
- A single protagonist/hero. One thing I don't like about party-based RPGs is that typically, a couple of your party members need to be KO’ed before you feel like the team is actually in any real danger. This doesn't tend to happen against non-boss enemies in most games, and thus those games often end up feeling uninteresting for long stretches.
- Interesting decision-making in combat -- even vs. non-boss enemies -- something beyond the typical RPG trope of "do basic attacks / target enemy elemental weaknesses / heal self when injured / repeat."
- No hard-to-use on-screen virtual D-pad for character movement. Give me a way to move my character that’s designed especially for a touchscreen, not one based on a traditional physical controller’s tactile D-pad!
- A combat system built around LOW numbers and visible enemy HP / stats, so I can calculate that if, for example, that enemy has 9 HP left, then I can perfectly finish it off by doing my 4 and 5 HP attacks respectively over the next 2 rounds.
- FAST combat. No waiting on long combat animations; no wading through multiple menus to kick off a combat round. This is my phone; let me whip it out when I’ve got 30 seconds, and actually accomplish something quickly.
- No save points. Why not just always keep my game saved automatically? (Even mid-combat!)
- Game designed with a goal of fun, not of corporate revenue generation! Absolutely no IAPs or premium currencies or ads or stamina timers.
I couldn't find that game on the App Store. So... I decided to write it myself!

After spending most of my evenings between 10:00pm and midnight (after my day job, spending time with my family, getting the kids into bed, and daily chores) for about 18 months designing and writing the game – learning the Objective-C programming language and the whole MacOS / iOS development ecosystem along the way – Vigil RPG was released in November 2014!

Here’s Vigil RPG’s combat screen, which illustrates the realization of a lot of the points noted above that I wanted to achieve with the game. You can check out more screenshots and info about the game at the Vigil RPG website!

Lifetime App Store Sales Stats

I don't really have any reason to keep them private, and I thought it might be insightful for other #indiedev folks and industry observers, so without further ado, here are the lifetime sales statistics to date for Vigil RPG (iOS)! According to my developer account at iTunes Connect:
- Released November 2014 at a price of US $2.99
- 354 paid copies sold, almost entirely at $2.99, with a few at $1.99 in a "birthday sale" in November 2015
- Total gross sales: US $1004
- About 70% of the lifetime sales of Vigil RPG came in the first 30 days after release.
- Vigil RPG got about ten 5-out-of-5-star community reviews on the App Store (and no 0-through-4-star reviews) immediately after release; it’s gotten zero community reviews since then. (Vigil RPG has no “review nag” prompts, which was an intentional design decision.)
- The second big spike in sales was after the 4-out-of-5-star TouchArcade review (which I was thrilled with, and found to be extremely on-point and fair – much respect to the reviewer, Shaun Musgrave). TouchArcade was the only major site to do a review.
- The little spike in November 2015 was the beginning of the $1.99 sale. Sales dropped off again rapidly even though I left the price at $1.99 for a while.
- Outside of the initial release and $1.99 sale periods, Vigil RPG sold at a rate of roughly 1 copy per week.
- Net proceeds after Apple's cut: US $707
- 3 x US $99 of Apple annual developer license fees to develop the game and keep it live on the App Store = $297. Net proceeds after Apple dev license fees: $410
- Other misc. operating costs -- State of Michigan incorporation fees for Aggro Magnet Games LLC, web hosting for http://aggromagnetgames.com -- of around $100 to date. Bottom line proceeds to date: About $310
- 122 free copies redeemed (promo codes sent to review sites; a few free giveaways to try and drum up visibility and community interest)
- I didn’t bother trying to keep any stats on piracy rates, but at least one site out there (fairly readily findable via Google search) has the binary of the game posted for free download.
Given a very, very rough estimate of about 600 hours spent creating the game, $310 in net profit works out to a wage of about $0.50/hour. Not exactly enough to quit the ol’ day job! (Fortunately, I already have a day job which I love!)

I am, however, honestly totally fine with that performance. I made an intentional decision up front that my goal for the Vigil RPG project was to "make the game I wanted to play" – with no design compromises being made for the sake of monetization. So no IAPs, no ads, no other typical "freemium" features (or “anti-features,” as the case may be) such as premium currencies or stamina timers.

$0.99 Sale

Consistent with my initial goal for Vigil RPG of prioritizing fun over profits, as of today, for the first time ever, the App Store price for Vigil RPG is reduced to $0.99! I’m hopeful that this will allow more people to enjoy the game – assuming there’s a segment of folks out there who are interested in iPhone RPGs, and are unwilling or unable to buy the game at the $2.99 price point, but will go ahead and pick it up for $0.99.

The main reason I didn't just cut the price all the way down to $0.00 (free) was that admittedly there's somewhat more cachet in being able to say "The game I made is for sale on the App Store!" than "I made a game and I'm giving it away on the App Store since no one was really buying it!" It would also be nice if Vigil RPG’s proceeds would at least cover the annual $99 that Apple requires to keep it listed on the App Store. To that end, I might bump the price back to the original $2.99 at some point if sales at the $0.99 price point don’t generate much increased volume relative to the 1 sale/week or so at the $2.99 price.

“Buy It Now!”

Hopefully this detailed peek into one game’s iOS App Store performance was helpful, or at least mildly interesting! If you’d like to read more about the gameplay of Vigil RPG, you can do so on the Vigil RPG website. Or, you can check out the full 5-to-10-hour adventure firsthand via Vigil RPG on the App Store, if you’ve got an iOS device and can scrape together enough loose change to join the exclusive club of premium iOS game owners! You can also hit me up with any questions you’ve got on Twitter at @AggroMagnetGame, or below in the comments!
One of the things I did to verify that my newly-built home PC was working well was to download and run a CPU temperature monitoring program, and leave it open on the secondary monitor while running programs on the primary monitor. Unfortunately, this pretty quickly turned up problems. The CPU, an Intel Core i7-4790K, would get dangerously hot when running certain games. The game Cities: Skylines exhibited the worst symptoms: after running the game for just a minute or two, although the game itself would run great, the CPU temperature (as reported by the temperature monitor program) would shoot up to nearly 100 degrees C! That’s close to the point where the PC will shut itself off to avoid damage, and much hotter than I would expect.

I thought the problem might be due to my having done a poor job applying the thermal paste to my CPU and/or installing the stock heatsink incorrectly, so I removed the heatsink, carefully cleaned off the old thermal paste, applied new thermal paste, and reinstalled the heatsink. After doing that, though, the CPU temperatures while playing Cities were still extremely hot.

At this point, on the advice of some of the friendly folks at the Gamers With Jobs community, I decided to throw some hardware at the problem, in the form of a US $29 Cooler Master Hyper 212 EVO heatsink! I’d previously never bothered with “premium” heatsinks, since I don’t overclock my systems (valuing rock-solid stability over an incremental speed boost). In this case, though, it seemed like the best option to protect the $340 investment I’d made in my nice CPU.

I’m very happy to report that it worked perfectly! With the 212 EVO installed (replacing the stock Intel heatsink, and with another fresh application of thermal paste), CPU temperatures while playing Cities: Skylines dropped from nearly 100 C down to the mid-40s C!

One caveat that I discovered with the 212 EVO, though, is that it fastens to the motherboard from both sides, effectively pressure-clamping itself down onto the CPU. Therefore, I needed to unscrew my motherboard from the case to install the new heatsink. If you’re doing a build that includes a premium heatsink, I suggest installing the heatsink and CPU onto the motherboard before screwing the motherboard into the case!

A final additional purchase I made was a cheap ($8) case exhaust fan, since the case I used didn’t come with one. I didn’t want the air inside my case warming up over periods of long computer use. PC cases evidently come with quite a few variants of fan screw mounting hole spacings, and the distance between the screw holes is not, as it turns out, the size of fan that you should order! I found the chart on this quietpc.com page very useful (and accurate!) in translating the spacing between mounting holes that I measured on the back of my case into the size of fan that I needed to order.
With my family's primary home PC having been built in 2008 and showing its age, it was time earlier this month to build my first general-use home PC in 8 years! Here's the parts list I put together and built, with a somewhat-flexible budget of around $1200:

I had a spare case and existing mouse / keyboard / monitor / speakers to use with this build, so I didn't need to factor those in. The website pcpartpicker.com (the target of all of the links in the parts list above) was a particularly helpful tool for keeping track of the parts for this build as I was researching and selecting them! It was a nice upgrade over the text file and/or spreadsheet-based systems I’ve used for this in the past.

Photos!

In chronological order of how I executed the build!

The goods, prior to the build. (Note: The LEGO blocks pictured were not actually included in the build.)

The empty case – with its circa 2006 350W power supply with only IDE power cables, no SATA, removed – ready for components!

Motherboard in place and screwed down!

The Intel Core i7 CPU, still in packaging. That’s a lot of power packed into a small package!

Close-up of the CPU installed in the motherboard. CPU locked into place (via the lock included as part of the motherboard).

CPU with thermal paste applied and then fan installed on top.

New power supply installed in the case and wired up to the motherboard. It isn’t very visible in this photo, but the case wires for the power switch, reset switch, and the front USB 2.0 ports are also wired to the motherboard in this photo, on the bottom edge.

The solid state drive (SSD) and traditional hard disk drive (HDD) side-by-side. Even though this wasn’t my first SSD install, I was surprised anew at just how small, thin, and light that drive is compared to the traditional HDD.

Drives installed in the case. This older case didn’t have a spot designed to accommodate the SSD, but that drive was small and light enough that I was comfortable with just screwing one side of it into the 3.5” HDD bay (at an angle to get the screw holes to line up!). The optical (CD/DVD) drive is also installed in the top 5.25” slot.

The G.Skill Ripjaws 2 x 8GB RAM installed (just to the right of the big CPU fan). Not sure how much faster the fancy red trim makes it, but it does look cool!

The GeForce GTX 970 video card, just out of its packaging. This thing is a beast, size-wise! Clearly EVGA wants it to look nice out of the box, since it came with clear plastic wrap over the entire thing (a couple of pieces of which are still on and visible in this photo, such as the piece over the “GeForce GTX 970” logo on the bottom edge).

Video card installed. It turned out to be juuust big enough in this case that I couldn’t quite install a full size hard drive directly across from it (even using a 90-degree SATA cable).

Finally, everything assembled, with the cover on over the rear-facing motherboard ports. Both the motherboard and video card came with caps installed over their video ports (as shown here), which I appreciated.

Mishaps and Mistakes

DOA HDD

So with everything assembled and monitor, keyboard, mouse, power, network, and speakers all plugged in, I hit the power button for the first time… and immediately noticed two obviously “unhappy” sounds:
- A buzzing-type sound coming from the bottom portion of the case;
- A repetitive squealing / grinding sound coming from the front of the case.
The first sound turned out to be easily solvable; one of the case wires at the bottom of the case was contacting the spinning fan on the underside of the video card. Getting those wires out of the way solved that.

Unfortunately, the second noise turned out to be the sound of a dead hard drive. The noise was coming from the 2TB HDD. It wouldn’t stop making the noise, and the drive wasn’t recognized by the BIOS (whereas the SSD and the optical drive were recognized with no issues). This was my first DOA (“dead on arrival”) part among the five PC builds I’ve done, so I suppose I was due. I got it returned and refunded with no issues… and ended up breaking my budget a bit by replacing it with both a fast 7200 RPM 1 TB drive for installing programs, and a 5600 RPM 4 TB drive for storing all the great photos my wife takes.

Forgotten Thermal Paste

Fortunately, I didn’t forget to apply thermal paste, which might have resulted in a cooked CPU. Rather, I forgot to order thermal paste. None came with the CPU I bought, and I only do PC builds infrequently (every few years) and so didn’t have any on hand. Also fortunately, my town has a little PC repair shop, and so one quick car trip and $1 later I was good to go with a single-use tube of thermal paste.

Don’t Leave the Motherboard Backplate For Last

With this build, I made the embarrassing mistake of leaving the install of the motherboard backplate (which ends up situated over all of the ports on the back of the PC) for last, thinking for whatever reason that I could pop it into place from the rear. However, I realized the hard way that the backplate does not install from the rear; rather, it needs to go on from the inside of the case, before the motherboard gets fastened into place. So, to my chagrin, I ended up unscrewing the motherboard from the case (leaving everything else connected), shifting it slightly to allow the backplate to be installed, and then positioning it back into place and screwing it back in. And so, oddly enough, this build came full circle, in that affixing the motherboard to the case was both the first and the last step!

So How’s it Working?

It’s working great! I haven’t measured it yet, but Windows 10 boots incredibly fast. The app where I’ve seen the most difference relative to my old PC is the game Cities: Skylines (the latest and greatest take on the city-builder genre pioneered by SimCity). On the old PC, saved games would take a very long time to load, and the game itself was quite playable but it would “chug” noticeably, particularly when trying to rapidly scroll around the map. On the new PC, by contrast, it’s super fast and smooth as silk!

After doing “main home PC” replacements every 4 years (in 2000, 2004, and 2008), that 2008 model lasted for a solid 8 years. (It’s actually still running, although there are signs it might be on its last legs, which helped prompt this upgrade). I’m hoping this new PC has a nice long life as well! I’d like it to still be running in 8 years… at which point my 10-year-old will be headed off to college. Now there’s an interesting thought!
I was working on a situation this morning where, as part of a C# unit test method using Entity Framework 6 for database access, I was inserting about 1000 records into a SQL Server database. This test was taking around 30-35 seconds to run, and I wanted to speed it up.

I tried re-coding the program to do the inserts using raw SQL, and that sped things up by about a factor of 3. Thanks to a tip I found in a StackOverflow answer by “Steve”, I was able to get the same perf increase using Entity Framework by simply disabling AutoDetectChangesEnabled:

mycontext.Configuration.AutoDetectChangesEnabled = false;

Doing that before the loop with my calls to .Add() sped up the EF code to the point where it was performing just as well as the raw SQL. So, in situations where you’re using EF to do lots of inserts and you don’t need AutoDetectChangesEnabled (because you’re not also doing any updates to existing records), try turning it off for a possible nice performance improvement.

More info on this from Microsoft EF team member Arthur Vickers: Secrets of DetectChanges Part 3: Switching off automatic DetectChanges
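In other words, the pattern looks roughly like this. This is a hedged sketch that reuses the MyCustomContext / Employee names from the bulk-upsert example above, and employeesToInsert is a hypothetical placeholder; restoring the flag in a finally block is just a defensive habit, not something EF requires:

using (MyCustomContext context = new MyCustomContext())
{
    bool originalSetting = context.Configuration.AutoDetectChangesEnabled;
    try
    {
        // Skip change tracking while we're only adding new entities.
        context.Configuration.AutoDetectChangesEnabled = false;

        foreach (Employee employee in employeesToInsert)
        {
            context.Employees.Add(employee);
        }

        context.SaveChanges();
    }
    finally
    {
        // Put the context back the way we found it.
        context.Configuration.AutoDetectChangesEnabled = originalSetting;
    }
}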
When creating a RESTful web service with a GET method that accepts a variable-length list of parameters, the URLs generated to call the service – including the query string containing the parameters – can end up being very long. I’m working on a REST method with a parameter that accepts a comma-delimited list of up to 2000 ID values. With ID values being up to 7 characters in length, the URL for a request with the maximum 2000 7-character ID values, plus an 8th comma separator character after each ID value, ends up being 16000+ characters long. For example:

http://mysite.example.com/api/products?productIDs=1000001,1000002,1000003,…many more IDs here!…,1001999,1002000
After running into multiple obstacles trying to get my ASP.NET application running on IIS to successfully accept such long incoming request URLs without throwing errors, I’ve come to the conclusion that in situations like this, it’s better to have the API method accept HTTP POST instead of HTTP GET, and have the client pass the list of parameters in the message body instead of in the URL. This approach aligns with the answers on Stack Overflow to the question Design RESTful GET API with a long list of query parameters.
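To make that concrete, here’s a rough sketch of what the POST version might look like in ASP.NET Web API 2 (assuming attribute routing is enabled). The route, controller, and ProductIdsRequest names here are hypothetical placeholders, not the actual API I was building:

using System.Collections.Generic;
using System.Web.Http;

// The list of IDs travels in the request body as JSON, so its size
// isn't constrained by any of the URL / query string length limits below.
public class ProductIdsRequest
{
    public List<int> ProductIDs { get; set; }
}

public class ProductsController : ApiController
{
    // POST /api/products/query  with body: { "ProductIDs": [ 1000001, 1000002, ... ] }
    [HttpPost]
    [Route("api/products/query")]
    public IHttpActionResult Query([FromBody] ProductIdsRequest request)
    {
        if (request == null || request.ProductIDs == null || request.ProductIDs.Count == 0)
        {
            return BadRequest("No product IDs were supplied.");
        }

        // Look up and return the requested products here (lookup elided for brevity).
        return Ok(request.ProductIDs.Count);
    }
}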
However, I figured I’d go ahead and post the data I collected while troubleshooting the various errors that ASP.NET and IIS can return when a request with a long URL is submitted, in case I need this information again in the future, or in case it might help anyone else.
In all cases, the specific configured values (e.g. "65535") should be tailored to the specific needs of your application. Be aware that setting these values could have adverse security consequences for your application, as a large HTTP request submitted by an attacker won’t be rejected early in the pipeline as it normally would be.
All of this investigation was done with an ASP.NET application targeting .NET Framework 4.5.1, running on IIS 10, on a Windows 10 64-bit PC.
Symptom 1
Response HTTP status code: HTTP 414
Response body: HTTP Error 414. The request URL is too long.
Relevant response header: Server: Microsoft-HTTPAPI/2.0
Fix 1
In the Windows Registry, at Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters, create a DWORD-type value with name MaxFieldLength and value sufficiently large, e.g. 65535.
Note: This error is actually thrown by http.sys, before the request even gets passed along to IIS in the request-handling pipeline. Thus, web.config settings aren’t able to address this particular error. See the article Check the “Server” HTTP header for the source of an error returned from IIS.
If you decide to make this change, then obviously it’ll need to be made in all environments (including all production server(s)) -- not just on your local dev PC. Also, whatever script and/or documentation your team uses to set up new server instances will need to be updated to include this registry setting, so that your team doesn’t forget to apply this setting 18 months from now when setting up a new production server. (This was a big reason that for the API I’m building, I opted to just scrap the long-GET-URL approach, and make my method a POST instead.)
Symptom 2
Response HTTP status code: HTTP 404
Response body: (Empty)
Relevant response headers:
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Fix 2
In web.config, add the following configuration (modifying existing elements when they are present, otherwise adding new elements):
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxQueryString="65535" />
    </requestFiltering>
  </security>
</system.webServer>
Note: In my ASP.NET solution, I needed to make this change in my root project’s web.config. This setting was ignored when I added the change in my API sub-project’s web.config. (This, however, was not the case for the web.config change mentioned in “Fix 3” below.) Related MSDN article: ASP.NET Configuration File Hierarchy and Inheritance
Symptom 3
Response HTTP status code: HTTP 400
Relevant response body text snippets:
System.Web.HttpException: The length of the query string for this request exceeds the configured maxQueryStringLength value.
[HttpException (0x80004005): The length of the query string for this request exceeds the configured maxQueryStringLength value.] System.Web.HttpRequest.ValidateInputIfRequiredByConfig() +492 System.Web.PipelineStepManager.ValidateHelper(HttpContext context) +55
Relevant response headers:
Server: Microsoft-IIS/10.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Fix 3
In web.config, add the following configuration (modifying existing elements when they are present, otherwise adding new elements):
<system.web>
  <httpRuntime maxRequestLength="65535" maxUrlLength="65535" maxQueryStringLength="65535" />
</system.web>
When an error response is returned from an HTTP request submitted to an IIS web server, the error response might actually be coming from http.sys (the “Hypertext Transfer Protocol Stack”), which processes incoming HTTP requests before they are passed along to IIS. You can determine the source of the error by looking at the “Server” HTTP header in the returned HTTP response. An error coming from http.sys will have the header:

Server:Microsoft-HTTPAPI/2.0
An error coming from IIS will instead have a header like:
Server:Microsoft-IIS/XX.X
Here’s some handy C# code to dump all headers from a given WebResponse to the console:
for (int i = 0; i < response.Headers.Count; i++)
{
    string header = response.Headers.GetKey(i);
    string[] values = response.Headers.GetValues(i);
    Console.WriteLine(header + ": " + String.Join(", ", values));
}
Bonus tip: For a bit more information on errors generated by http.sys, check out the log files in this folder on the web server PC:
%windir%\System32\LogFiles\HTTPERR